Re: [HACKERS] postgres_fdw, remote triggers and schemas

2013-11-15 Thread Thom Brown
On 15 November 2013 21:03, Tom Lane t...@sss.pgh.pa.us wrote:
 Thom Brown t...@linux.com writes:
 Is this unintended, or is it something users should fix themselves by
 being explicit about relation schemas in trigger functions?  Should
 the schema search path instead pick up whatever the default would be
 for the user being used for the connection?

 postgres_fdw intentionally runs the remote session with a very minimal
 search_path (I think just pg_catalog, in fact).  I would argue that
 any trigger that breaks because of that is broken anyway, since it
 would fail --- possibly with security implications --- if some ordinary
 user modified the search path.

That makes sense.  Would it be worth putting a note in the
documentation about the behaviour of the search path on the
postgres_fdw page?

-- 
Thom


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Turning recovery.conf into GUCs

2013-11-15 Thread Josh Berkus
On 11/15/2013 06:38 AM, Jaime Casanova wrote:
  Please check for compiler warnings in pg_basebackup:
 
  pg_basebackup.c:1109:1: warning: ‘escapeConnectionParameter’ defined but 
  not used [-Wunused-function]
  pg_basebackup.c:1168:1: warning: ‘escape_quotes’ defined but not used 
  [-Wunused-function]
 
 those are functions that are no longer used, but Josh considered that they
 could become useful before release.
 I can put them inside #ifdef _NOT_USED_ decorations, or just remove
 them now and re-add them if/when we find some use for them

Wait, which Josh?  Not me ...

-- 
Josh Berkus
PostgreSQL Experts Inc.
http://pgexperts.com




Re: [HACKERS] additional json functionality

2013-11-15 Thread David E. Wheeler
On Nov 15, 2013, at 12:37 PM, Andrew Dunstan and...@dunslane.net wrote:

 It's making my head hurt, to be honest, and it sounds like a recipe for years 
 and years of inconsistencies and bugs.
 
 I don't want to have two types, but I think I'd probably rather have two 
 clean types than this. I can't imagine it being remotely acceptable to have 
 behaviour depend on whether or not something was ever stored, which is what 
 this looks like.

I dislike having two types (no, three -- there is hstore, too!). But if there 
is consensus for it (and I am not at all convinced that there is at this 
point), I can live with it. Docs would have to be pretty explicit, though.

David





Re: [HACKERS] additional json functionality

2013-11-15 Thread Josh Berkus
On 11/15/2013 01:12 PM, David E. Wheeler wrote:
 On Nov 15, 2013, at 12:37 PM, Andrew Dunstan and...@dunslane.net wrote:
 
 It's making my head hurt, to be honest, and it sounds like a recipe for 
 years and years of inconsistencies and bugs.

 I don't want to have two types, but I think I'd probably rather have two 
 clean types than this. I can't imagine it being remotely acceptable to have 
 behaviour depend on whether or not something was ever stored, which is what 
 this looks like.
 
 I dislike having two types (no, three -- there is hstore, too!). But if 
 there is consensus for it (and I am not at all convinced that there is at 
 this point), I can live with it. Docs would have to be pretty explicit, 
 though.

I would be happy to do a survey on how common key ordering and/or
duplicate keys are in postgresql+json.  However, I'm not clear on what
set of survey responses would decide us in either direction.  Even as a
pool of one, Merlin's case is a pretty persuasive example ... and, as he
points out, there will be applications built around 9.3's JSON which
haven't even been written yet.

I believe this was a danger we recognized when we added the JSON type,
including the possibility that a future binary type might need to be a
separate type due to compatibility issues.  The only sad thing is the
naming; it would be better for the new type to carry the JSON name in
the future, but there's no way to make that work that I can think of.

-- 
Josh Berkus
PostgreSQL Experts Inc.
http://pgexperts.com




Re: [HACKERS] additional json functionality

2013-11-15 Thread David Johnston
Merlin Moncure-2 wrote
 I don't want to have two types, but I think I'd probably rather have two
 clean types than this. I can't imagine it being remotely acceptable to
 have
 behaviour depend on whether or not something was ever stored, which is
 what
 this looks like.
 
 Well, maybe so.  My main gripe with the 'two types' solutions is that:
 1) current type is already in core (that is, not an extension). In
 hindsight, I think this was a huge mistake.
 2) current type has grabbed the 'json' type name and the 'json_xxx' API.
 3) current type is getting used all over the place
 
 'Two types' means that (AIUI) you can't mess around with the existing
 API too much. And the new type (due out in 2016?) will be something of
 a second-class citizen.  The ramifications of dealing with the bifurcation
 are what make *my* head hurt.  Every day the json stuff is getting
 more and more widely adopted.  9.4 isn't going to drop until 2014 best
 case and it won't be widely deployed in the enterprise until 2015 and
 beyond.  So you're going to have a huge code base operating on the
 'legacy' json type.
 
 merlin

The current type can store the exact same data as what a hash-like type
could store.  It can also store stuff a hash-like type would not be able to
store.  From my reading the main reason for adding the new hash-like type
would be to improve the performance characteristics of using said type. So:

1) if reasonable performance can be had with the current type the new type
would be unnecessary
2) if #1 is not possible then the new type trades off leniency in format for
performance improvements

One implication of #2 is that existing json that wants the improved
performance will need to undergo a full-table rewrite in order to be
converted.

Both textual output representations are identical, and function overloading
and the API should be able to be maintained substantially identically between
the two types.

David J





Re: [HACKERS] additional json functionality

2013-11-15 Thread k...@rice.edu
On Fri, Nov 15, 2013 at 01:18:22PM -0800, Josh Berkus wrote:
 
 I believe this was a danger we recognized when we added the JSON type,
 including the possibility that a future binary type might need to be a
 separate type due to compatibility issues.  The only sad thing is the
 naming; it would be better for the new type to carry the JSON name in
 the future, but there's no way to make that work that I can think of.
 
 -- 
 Josh Berkus
 PostgreSQL Experts Inc.
 http://pgexperts.com
 

What about a GUC for json version? Then you could choose, and they
could both be called json.

Regards,
Ken




Re: [HACKERS] additional json functionality

2013-11-15 Thread Tom Lane
k...@rice.edu k...@rice.edu writes:
 On Fri, Nov 15, 2013 at 01:18:22PM -0800, Josh Berkus wrote:
 I believe this was a danger we recognized when we added the JSON type,
 including the possibility that a future binary type might need to be a
 separate type due to compatibility issues.  The only sad thing is the
 naming; it would be better for the new type to carry the JSON name in
 the future, but there's no way to make that work that I can think of.

 What about a GUC for json version? Then you could choose, and they
 could both be called json.

GUCs that change user-visible semantics have historically proven to be
much less good ideas than they seem at first glance.

regards, tom lane




Re: [HACKERS] additional json functionality

2013-11-15 Thread Andrew Dunstan


On 11/15/2013 04:53 PM, Tom Lane wrote:

k...@rice.edu k...@rice.edu writes:

On Fri, Nov 15, 2013 at 01:18:22PM -0800, Josh Berkus wrote:

I believe this was a danger we recognized when we added the JSON type,
including the possibility that a future binary type might need to be a
separate type due to compatibility issues.  The only sad thing is the
naming; it would be better for the new type to carry the JSON name in
the future, but there's no way to make that work that I can think of.

What about a GUC for json version? Then you could choose, and they
could both be called json.

GUCs that change user-visible semantics have historically proven to be
much less good ideas than they seem at first glance.





Yeah, it would be a total foot gun here I think.

I've come to the conclusion that the only possible solution is to have a 
separate type. That's a bit sad, but there it is. The upside is that 
this will make the work Teodor has mentioned simpler. (Desperately 
making lemonade from lemons here.)



cheers

andrew





Re: [HACKERS] additional json functionality

2013-11-15 Thread David E. Wheeler
On Nov 15, 2013, at 2:02 PM, Andrew Dunstan and...@dunslane.net wrote:

 Yeah, it would be a total foot gun here I think.
 
 I've come to the conclusion that the only possible solution is to have a 
 separate type. That's a bit sad, but there it is. The upside is that this 
 will make the work Teodor has mentioned simpler. (Desperately making lemonade 
 from lemons here.)

Fine. My bikeshedding: Call the new type jsonb. “B” for “binary.” Also, the 
old one is implicitly jsona. Get it?

David





Re: [HACKERS] Review:Patch: SSL: prefer server cipher order

2013-11-15 Thread Adrian Klaver

On 11/15/2013 11:49 AM, Marko Kreen wrote:

On Fri, Nov 15, 2013 at 11:16:25AM -0800, Adrian Klaver wrote:

The description of the GUCs show up in the documentation but I am
not seeing the GUCs themselves in postgresql.conf, so I could test
no further. It is entirely possible I am missing a step and would
appreciate enlightenment.


Sorry, I forgot to update sample config.

ssl-prefer-server-cipher-order-v2.patch
- Add GUC to sample config
- Change default value to 'true', per comments from Alvaro and Magnus.

ssl-ecdh-v2.patch
- Add GUC to sample config



Well that worked.
I made ssl connections to the server using psql and verified it 
respected the order of ssl_ciphers. I do not have a client available 
with a different view of cipher order so I cannot test that.


--
Adrian Klaver
adrian.kla...@gmail.com




Re: [HACKERS] additional json functionality

2013-11-15 Thread Hannu Krosing
On 11/15/2013 09:25 PM, Merlin Moncure wrote:
 On Fri, Nov 15, 2013 at 1:51 PM, David E. Wheeler da...@justatheory.com 
 wrote:
 On Nov 15, 2013, at 6:35 AM, Merlin Moncure mmonc...@gmail.com wrote:

 Here are the options on the table:
 1) convert existing json type to binary flavor (notwithstanding objections)
 2) maintain side by side types, one representing binary, one text.
 unfortunately, i think the text one must get the name 'json' due to
 unfortunate previous decision.
 3) merge the behaviors into a single type and get the best of both
 worlds (as suggested upthread).

 I think we need to take a *very* hard look at #3 before exploring #1
  or #2: Haven't thought it through yet but it may be possible to handle
 this in such a way that will be mostly transparent to the end user and
 may have other benefits such as a faster path for serialization.
 If it’s possible to preserve order and still get the advantages of binary 
 representation --- which are substantial (see 
 http://theory.so/pg/2013/10/23/testing-nested-hstore/ and 
 http://theory.so/pg/2013/10/25/indexing-nested-hstore/ for a couple of 
 examples) --- without undue maintenance overhead, then great.

 I am completely opposed to duplicate key preservation in JSON, though. It 
 has caused us a fair number of headaches at $work.
Let's just change the current json-constructing functions' return type to
json_text which is exactly like text with 2 extra properties:

1) it is syntax-checked for valid json (that is it can be cast to json)

and

2) if included in outer json as data, it is included directly and is not
quoted like text


With just these two it should be possible to have the following

a) Merlin and others can keep (ab)using json_text as this
wonderfully versatile format for feeding json parsers and
visualisers which accept duplicates and consider order significant

b) cast this to binary json object if de-duplication and fast access to
internals is needed

I do not think we need anything else for this

As far as I understand merlin is mostly ok with stored json being
normalised and the problem is just with constructing extended
json (a.k.a. processing instructions) to be used as source for
specialised parsers and renderers.

-- 
Hannu Krosing
PostgreSQL Consultant
Performance, Scalability and High Availability
2ndQuadrant Nordic OÜ





Re: [HACKERS] additional json functionality

2013-11-15 Thread Merlin Moncure
 On Fri, Nov 15, 2013 at 4:31 PM, Hannu Krosing ha...@2ndquadrant.com wrote:
 I think we need to take a *very* hard look at #3 before exploring #1
  or #2: Haven't thought it through yet but it may be possible to handle
 this in such a way that will be mostly transparent to the end user and
 may have other benefits such as a faster path for serialization.
 If it’s possible to preserve order and still get the advantages of binary 
 representation --- which are substantial (see 
 http://theory.so/pg/2013/10/23/testing-nested-hstore/ and 
 http://theory.so/pg/2013/10/25/indexing-nested-hstore/ for a couple of 
 examples) --- without undue maintenance overhead, then great.

 I am completely opposed to duplicate key preservation in JSON, though. It 
 has caused us a fair number of headaches at $work.
 Let's just change the current json-constructing functions' return type to
 json_text which is exactly like text with 2 extra properties:

 1) it is syntax-checked for valid json (that is it can be cast to json)

 and

 2) if included in outer json as data, it is included directly and is not
 quoted like text


 With just these two it should be possible to have the following

 a) Merlin and others can keep (ab)using json_text as this
 wonderfully versatile format for feeding json parsers and
 visualisers which accept duplicates and consider order significant

 b) cast this to binary json object if de-duplication and fast access to
 internals is needed

 I do not think we need anything else for this

I think you may be on to something here.  This might also be a way to
opt in to fast(er) serialization (upthread it was noted this is
unimportant; I'm skeptical).  I deeply feel that two types is not the
right path but I'm pretty sure that this can be finessed.

 As far as I understand merlin is mostly ok with stored json being
 normalised and the problem is just with constructing extended
 json (a.k.a. processing instructions) to be used as source for
 specialised parsers and renderers.

yes, this is correct.

merlin




Re: [HACKERS] pg_dump insert with column names speedup

2013-11-15 Thread Tom Lane
David Rowley dgrowle...@gmail.com writes:
 The tailing white space is fixed in the attached patch.

Applied with minor cosmetic adjustments.

regards, tom lane




Re: [HACKERS] additional json functionality

2013-11-15 Thread Josh Berkus
On 11/15/2013 02:59 PM, Merlin Moncure wrote:
  On Fri, Nov 15, 2013 at 4:31 PM, Hannu Krosing ha...@2ndquadrant.com wrote:
 I think you may be on to something here.  This might also be a way to
 opt in to fast(er) serialization (upthread it was noted this is
 unimportant; I'm skeptical).  I deeply feel that two types is not the
 right path but I'm pretty sure that this can be finessed.
 
 As far as I understand merlin is mostly ok with stored json being
 normalised and the problem is just with constructing extended
 json (a.k.a. processing instructions) to be used as source for
 specialised parsers and renderers.

Thing is, I'm not particularly concerned about *Merlin's* specific use
case, which there are ways around. What I am concerned about is that we
may have users who have years of data stored in JSON text fields which
won't survive an upgrade to binary JSON, because we will stop allowing
certain things (ordering, duplicate keys) which are currently allowed in
those columns.  At the very least, if we're going to have that kind of
backwards compatibility break we'll want to call the new version 10.0.

That's why naming old JSON as json_text won't work; it'll be a
hardened roadblock to upgrading.

-- 
Josh Berkus
PostgreSQL Experts Inc.
http://pgexperts.com




Re: [HACKERS] autovacuum_work_mem

2013-11-15 Thread Peter Geoghegan
On Mon, Oct 21, 2013 at 6:44 AM, Magnus Hagander mag...@hagander.net wrote:
 +1. If changing at all, then maybe just autovacuum_mem?

I would like to stick with autovacuum_work_mem - it reflects the fact
that it's a setting shadowed by maintenance_work_mem, without being
too verbose.

-- 
Peter Geoghegan




Re: [HACKERS] Extra functionality to createuser

2013-11-15 Thread Christopher Browne
On Fri, Nov 15, 2013 at 3:14 PM, Peter Eisentraut pete...@gmx.net wrote:
 On 11/14/13, 4:35 PM, Christopher Browne wrote: On Thu, Nov 14, 2013 at
 5:41 AM, Sameer Thakur samthaku...@gmail.com wrote:
 So i think -g option is failing

 Right you are.

 I was missing a g: in the getopt_long() call.

 Attached is a revised patch that handles that.


 src/bin/scripts/createuser.c:117: indent with spaces.
 +   case 'g':
 src/bin/scripts/createuser.c:118: indent with spaces.
 +   roles = pg_strdup(optarg);

OK, I ran pgindent on createuser.c, which leads to the Next Patch...

-- 
When confronted by a difficult problem, solve it by reducing it to the
question, "How would the Lone Ranger handle this?"
diff --git a/doc/src/sgml/ref/createuser.sgml b/doc/src/sgml/ref/createuser.sgml
index 2f1ea2f..5fedc80 100644
--- a/doc/src/sgml/ref/createuser.sgml
+++ b/doc/src/sgml/ref/createuser.sgml
@@ -131,6 +131,16 @@ PostgreSQL documentation
  </varlistentry>
 
  <varlistentry>
+  <term><option>-g</option></term>
+  <term><option>--roles</option></term>
+  <listitem>
+   <para>
+    Indicates roles to associate with this role.
+   </para>
+  </listitem>
+ </varlistentry>
+
+ <varlistentry>
   <term><option>-i</option></term>
   <term><option>--inherit</option></term>
   <listitem>
diff --git a/src/bin/scripts/createuser.c b/src/bin/scripts/createuser.c
index d1542d9..e2e1134 100644
--- a/src/bin/scripts/createuser.c
+++ b/src/bin/scripts/createuser.c
@@ -47,6 +47,7 @@ main(int argc, char *argv[])
	{"pwprompt", no_argument, NULL, 'P'},
	{"encrypted", no_argument, NULL, 'E'},
	{"unencrypted", no_argument, NULL, 'N'},
+	{"roles", required_argument, NULL, 'g'},
	{NULL, 0, NULL, 0}
};
 
@@ -57,6 +58,7 @@ main(int argc, char *argv[])
char   *host = NULL;
char   *port = NULL;
char   *username = NULL;
+   char   *roles = NULL;
enum trivalue prompt_password = TRI_DEFAULT;
boolecho = false;
boolinteractive = false;
@@ -83,7 +85,7 @@ main(int argc, char *argv[])
 
	handle_help_version_opts(argc, argv, "createuser", help);
 
-	while ((c = getopt_long(argc, argv, "h:p:U:wWedDsSaArRiIlLc:PEN",
+	while ((c = getopt_long(argc, argv, "h:p:U:g:wWedDsSaArRiIlLc:PEN",
							long_options, &optindex)) != -1)
{
switch (c)
@@ -112,6 +114,9 @@ main(int argc, char *argv[])
case 'D':
createdb = TRI_NO;
break;
+   case 'g':
+   roles = pg_strdup(optarg);
+   break;
case 's':
case 'a':
superuser = TRI_YES;
@@ -302,6 +307,8 @@ main(int argc, char *argv[])
		appendPQExpBuffer(sql, " NOREPLICATION");
	if (conn_limit != NULL)
		appendPQExpBuffer(sql, " CONNECTION LIMIT %s", conn_limit);
+	if (roles != NULL)
+		appendPQExpBuffer(sql, " IN ROLE %s", roles);
	appendPQExpBuffer(sql, ";\n");
 
if (echo)
@@ -334,6 +341,7 @@ help(const char *progname)
	printf(_("  -D, --no-createdb         role cannot create databases (default)\n"));
	printf(_("  -e, --echo                show the commands being sent to the server\n"));
	printf(_("  -E, --encrypted           encrypt stored password\n"));
+	printf(_("  -g, --roles               roles to associate with this new role\n"));
	printf(_("  -i, --inherit             role inherits privileges of roles it is a\n"
	         "                            member of (default)\n"));
	printf(_("  -I, --no-inherit          role does not inherit privileges\n"));



Re: [HACKERS] Improve code in tidbitmap.c

2013-11-15 Thread Tom Lane
Etsuro Fujita fujita.ets...@lab.ntt.co.jp writes:
 I'd like to do the complementary explanation of this.
 In tbm_union_page() and tbm_intersect_page() in tidbitmap.c, WORDS_PER_PAGE
 is used in the scan of a lossy chunk, instead of WORDS_PER_CHUNK, where
 these macros are defined as:

 /* number of active words for an exact page: */
 #define WORDS_PER_PAGE  ((MAX_TUPLES_PER_PAGE - 1) / BITS_PER_BITMAPWORD + 1)
 /* number of active words for a lossy chunk: */
 #define WORDS_PER_CHUNK  ((PAGES_PER_CHUNK - 1) / BITS_PER_BITMAPWORD + 1)

 Though the use of WORDS_PER_PAGE in the scan of a lossy chunk is logically
 correct since these macros implicitly satisfy that WORDS_PER_CHUNK <
 WORDS_PER_PAGE, I think WORDS_PER_CHUNK should be used in the scan of a
 lossy chunk for code readability and maintenance.  So, I submitted a patch
 working in such a way in an earlier email.

This is a bug fix, not a performance improvement (any improvement would
be welcome, but that's not the point).  There's absolutely nothing
guaranteeing that WORDS_PER_CHUNK is less than WORDS_PER_PAGE, and if
it were the other way around, the code would be outright broken.  It's
pure luck that it was merely inefficient.

Committed, thanks for finding it!

regards, tom lane




Re: [HACKERS] strncpy is not a safe version of strcpy

2013-11-15 Thread David Rowley
On Sat, Nov 16, 2013 at 4:09 AM, Alvaro Herrera alvhe...@2ndquadrant.com wrote:

 David Rowley escribió:
  On Fri, Nov 15, 2013 at 12:33 PM, Tomas Vondra t...@fuzzy.cz wrote:

   Be careful with 'Name' data type - it's not just a simple string
 buffer.
   AFAIK it needs to work with hashing etc. so the zeroing is actually
 needed
   here to make sure two values produce the same result. At least that's
 how
   I understand the code after a quick check - for example this is from
 the
   same jsonfuncs.c you mentioned:
  
   memset(fname, 0, NAMEDATALEN);
    strncpy(fname, NameStr(tupdesc->attrs[i]->attname), NAMEDATALEN);
   hashentry = hash_search(json_hash, fname, HASH_FIND, NULL);
  
   So the zeroing is on purpose, although if strncpy does that then the
    memset is probably superfluous.

 This code should probably be using namecpy().  Note namecpy() doesn't
 memset() after strncpy() and has survived the test of time, which
 strongly suggests that the memset is indeed superfluous.


I went on a bit of a strncpy cleanup rampage this morning and ended up
finding quite a few places where strncpy is used wrongly.
I'm not quite sure if I have got them all in this patch, but I think I've
got the obvious ones at least.

For the hash_search in jsonfuncs.c after thinking about it a bit more...
Can we not just pass the attname without making a copy of it? I see keyPtr
in hash_search is const void * so it shouldn't get modified in there. I
can't quite see the reason for making the copy.

Attached is a patch with various cleanups where I didn't like the look of
the strncpy. I didn't go overboard with this as I know making this sort of
small changes all over can be a bit scary and I thought maybe it would get
rejected on that basis.

I also cleaned up things like strncpy(dest, src, strlen(src)); which just
seems a bit weird and I'm failing to get my head around why it was done. I
replaced these with memcpy instead, but they could perhaps be a plain old
strcpy.

Regards

David Rowley



 --
 Álvaro Herrera                http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services



strncpy_cleanup_v0.1.patch
Description: Binary data



Re: [HACKERS] additional json functionality

2013-11-15 Thread David Johnston
Josh Berkus wrote
 On 11/15/2013 02:59 PM, Merlin Moncure wrote:
  On Fri, Nov 15, 2013 at 4:31 PM, Hannu Krosing <hannu@> wrote:
 I think you may be on to something here.  This might also be a way to
 opt in to fast(er) serialization (upthread it was noted this is
 unimportant; I'm skeptical).  I deeply feel that two types is not the
 right path but I'm pretty sure that this can be finessed.
 
 As far as I understand merlin is mostly ok with stored json being
 normalised and the problem is just with constructing extended
 json (a.k.a. processing instructions) to be used as source for
 specialised parsers and renderers.
 
 Thing is, I'm not particularly concerned about *Merlin's* specific use
 case, which there are ways around. What I am concerned about is that we
 may have users who have years of data stored in JSON text fields which
 won't survive an upgrade to binary JSON, because we will stop allowing
 certain things (ordering, duplicate keys) which are currently allowed in
 those columns.  At the very least, if we're going to have that kind of
 backwards compatibility break we'll want to call the new version 10.0.
 
 That's why naming old JSON as json_text won't work; it'll be a
 hardened roadblock to upgrading.

Agreed.  I can't imagine a use-case that would warrant breaking the current
behavior of json.  Either we live with just one, text-oriented, json type
and finesse whatever performance gains we can without breaking
compatibility; or we introduce additional types (I personally like adding 2
instead of one but just adding the binary one would be ok) which - barring
an overwhelming desire by -core to group-self-flagellate - means giving the
new type an as yet unused name.

From a marketing perspective having 3 types with the following properties is
an easy message to sell:

1) json - liberal interpretation w/ validation only; stored as text; output
as-is
2) json_text - strict interpretation w/ validation only; stored as text;
output as-is
3) json_binary - strict interpretation w/ validation & parsing; stored as
binary; output normalized

This way json seems less like a mistake and more like an intentional desire
to introduce a liberal type that meets data exchange needs in the short term
and now, later, a structured data storage mechanism similar to hstore.

Even if you have json_binary I can imagine that some people would want to be
able to store the original strict json as-is.  Sure, they can use text, but
this way intent is made clear and validation is attached directly to the
type as opposed to having to be done separately.  The use-cases described
for needing a liberal json prove this out.  That said json would be an
acceptable replacement for json_text in many cases and separate validation
for strict json prior to storing into json isn't that heinous.

David J.







Re: [HACKERS] additional json functionality

2013-11-15 Thread David Johnston
Looking at this a different way: could we just implement BSON and leave json
alone?

http://bsonspec.org/

David J.







Re: [HACKERS] additional json functionality

2013-11-15 Thread Josh Berkus
On 11/15/2013 04:00 PM, David Johnston wrote:
 Looking at this a different way: could we just implement BSON and leave json
 alone?
 
 http://bsonspec.org/

In short?  No.

For one thing, our storage format is different from theirs (better,
frankly), and as a result is not compliant with their standard.

That's a reason why we won't use the name BSON, either, since it's a
trademark of 10gen.

-- 
Josh Berkus
PostgreSQL Experts Inc.
http://pgexperts.com




Re: [HACKERS] additional json functionality

2013-11-15 Thread Andrew Dunstan


On 11/15/2013 07:32 PM, Josh Berkus wrote:

On 11/15/2013 04:00 PM, David Johnston wrote:

Looking at this a different way: could we just implement BSON and leave json
alone?

http://bsonspec.org/

In short?  No.

For one thing, our storage format is different from theirs (better,
frankly), and as a result is not compliant with their standard.

That's a reason why we won't use the name BSON, either, since it's a
trademark of 10gen.




What is more, it has restrictions which we do not wish to have. See for 
example its treatment of numerics.


cheers

andrew




Re: [HACKERS] The number of character limitation of custom script on pgbench

2013-11-15 Thread Tom Lane
Sawada Masahiko sawada.m...@gmail.com writes:
 I attached the patch which solves this problem, and have submitted it to CF3.
 I changed how to get the SQL from custom script file.

This needed a bit of work:

- Use of strncat didn't seem particularly safe, or efficient.  I changed
it to explicitly account for space consumption in the result buffer.
- It leaked the buffer space used.  While this likely doesn't matter for
foreseeable usage, it seemed worth fixing.
- Didn't do the right thing for a file not ending with a newline.
- Didn't follow project code layout standards.

I've committed the attached revised version.

regards, tom lane

diff --git a/contrib/pgbench/pgbench.c b/contrib/pgbench/pgbench.c
index fff71e5..2c96fae 100644
*** a/contrib/pgbench/pgbench.c
--- b/contrib/pgbench/pgbench.c
*** process_commands(char *buf)
*** 2016,2021 
--- 2016,2064 
  	return my_commands;
  }
  
+ /*
+  * Read a line from fd, and return it in a malloc'd buffer.
+  * Return NULL at EOF.
+  *
+  * The buffer will typically be larger than necessary, but we don't care
+  * in this program, because we'll free it as soon as we've parsed the line.
+  */
+ static char *
+ read_line_from_file(FILE *fd)
+ {
+ 	char		tmpbuf[BUFSIZ];
+ 	char	   *buf;
+ 	size_t		buflen = BUFSIZ;
+ 	size_t		used = 0;
+ 
+ 	buf = (char *) palloc(buflen);
+ 	buf[0] = '\0';
+ 
+ 	while (fgets(tmpbuf, BUFSIZ, fd) != NULL)
+ 	{
+ 		size_t		thislen = strlen(tmpbuf);
+ 
+ 		/* Append tmpbuf to whatever we had already */
+ 		memcpy(buf + used, tmpbuf, thislen + 1);
+ 		used += thislen;
+ 
+ 		/* Done if we collected a newline */
+ 		if (thislen > 0 && tmpbuf[thislen - 1] == '\n')
+ 			break;
+ 
+ 		/* Else, enlarge buf to ensure we can append next bufferload */
+ 		buflen += BUFSIZ;
+ 		buf = (char *) pg_realloc(buf, buflen);
+ 	}
+ 
+ 	if (used > 0)
+ 		return buf;
+ 
+ 	/* Reached EOF */
+ 	free(buf);
+ 	return NULL;
+ }
+ 
  static int
  process_file(char *filename)
  {
*** process_file(char *filename)
*** 2024,2030 
  	Command   **my_commands;
  	FILE	   *fd;
  	int			lineno;
! 	char		buf[BUFSIZ];
  	int			alloc_num;
  
  	if (num_files >= MAX_FILES)
--- 2067,2073 
  	Command   **my_commands;
  	FILE	   *fd;
  	int			lineno;
! 	char	   *buf;
  	int			alloc_num;
  
  	if (num_files >= MAX_FILES)
*** process_file(char *filename)
*** 2046,2056 
  
  	lineno = 0;
  
! 	while (fgets(buf, sizeof(buf), fd) != NULL)
  	{
  		Command*command;
  
  		command = process_commands(buf);
  		if (command == NULL)
  			continue;
  
--- 2089,2102 
  
  	lineno = 0;
  
! 	while ((buf = read_line_from_file(fd)) != NULL)
  	{
  		Command*command;
  
  		command = process_commands(buf);
+ 
+ 		free(buf);
+ 
  		if (command == NULL)
  			continue;
  

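The committed read_line_from_file() above grows its buffer one BUFSIZ chunk at a time until fgets() delivers a newline. The same technique can be sketched as standalone C, using plain malloc/realloc in place of the pg_* wrappers and a deliberately tiny chunk size so the realloc path is actually exercised (this is an illustration, not the committed code):

```c
#include <assert.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define CHUNK 8					/* tiny on purpose, to exercise realloc */

/*
 * Read one line of arbitrary length from fp into a malloc'd buffer.
 * Returns NULL at EOF; the caller frees the result.
 */
static char *
read_line(FILE *fp)
{
	char		tmp[CHUNK];
	size_t		buflen = CHUNK;
	size_t		used = 0;
	char	   *buf = malloc(buflen);

	buf[0] = '\0';

	while (fgets(tmp, CHUNK, fp) != NULL)
	{
		size_t		thislen = strlen(tmp);

		/* Append tmp (including its NUL) to what we had already */
		memcpy(buf + used, tmp, thislen + 1);
		used += thislen;

		/* Done if we collected a newline */
		if (thislen > 0 && tmp[thislen - 1] == '\n')
			break;

		/* Else, enlarge buf so the next fgets bufferload always fits */
		buflen += CHUNK;
		buf = realloc(buf, buflen);
	}

	if (used > 0)
		return buf;

	free(buf);					/* reached EOF with nothing read */
	return NULL;
}
```

As in the committed version, the buffer can end up larger than needed; that is fine because the caller frees it as soon as the line has been parsed.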


Re: [HACKERS] review: autovacuum_work_mem

2013-11-15 Thread Nigel Heron
On Sun, Oct 20, 2013 at 7:21 AM, Magnus Hagander mag...@hagander.net wrote:
> On Sun, Oct 20, 2013 at 2:22 AM, Peter Geoghegan p...@heroku.com wrote:
>>
>> It seemed neater to me to create a new flag, so that in principle any
>> vacuum() code path can request autovacuum_work_mem, rather than having
>> lazyvacuum.c code call IsAutoVacuumWorkerProcess() for the same
>> purpose. To date, that's only been done within vacuumlazy.c for things
>> like logging.
>
> But I'd suggest just a:
>
> int vac_work_mem = (IsAutoVacuumWorkerProcess() && autovacuum_work_mem
> != -1) ? autovacuum_work_mem : maintenance_work_mem;
>
> and not sending around a boolean flag through a bunch of places when
> it really means just the same thing,

I agree with Magnus here, calling IsAutoVacuumWorkerProcess() seems
cleaner than the new flag and the function parameter changes.
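Pulled out as a standalone sketch, the fallback Magnus suggests looks like this (the function name and parameters here are illustrative, not from the patch):

```c
#include <stdbool.h>

/*
 * Fallback rule: an autovacuum worker uses autovacuum_work_mem when it
 * is set (!= -1); a regular VACUUM, or an autovacuum worker with the
 * GUC unset, falls back to maintenance_work_mem.
 */
static int
effective_vacuum_work_mem(bool is_autovacuum_worker,
						  int autovacuum_work_mem,
						  int maintenance_work_mem)
{
	return (is_autovacuum_worker && autovacuum_work_mem != -1)
		? autovacuum_work_mem
		: maintenance_work_mem;
}
```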


> Also, isn't this quite confusing:
> + # Note:  autovacuum only prefers autovacuum_work_mem over maintenance_work_mem
> + #autovacuum_work_mem = -1 # min 1MB, or -1 to disable
>
> Where does the "only" come from? And we don't really use the term
> "prefers over" anywhere else there. And -1 doesn't actually disable
> it. I suggest following the pattern of the other parameters that work
> the same way, which would then just be:
>
> #autovacuum_work_mem = -1  # amount of memory for each autovacuum
> worker. -1 means use maintenance_work_mem

+1

here's my review of the v1 patch,

patch features tested:
- regular VACUUM * commands ignore autovacuum_work_mem.
- if autovacuum_work_mem = -1 then maintenance_work_mem is used by autovacuum.
- if autovacuum_work_mem is set then it is used instead of
maintenance_work_mem by autovacuum.
- the autovacuum_work_mem guc has a sighup context and the global
variable used in the code is correctly refreshed during a reload.
- a 1MB lower limit for autovacuum_work_mem is enforced.

build (platform is fedora 18):
- patch (context format) applies to current HEAD with offsets (please rebase).
- documentation patches included.
- make doesn't produce any extra warnings.
- make check passes all tests (no extra regression tests).

questions/comments:
- should the category of the guc be RESOURCES_MEM (as in the patch)
or AUTOVACUUM? seems to fit in both, but it's really autovacuum
specific.
- could you also add documentation to the autovacuum section of
maintenance.sgml? I think people tuning their autovacuum are likely to
look there for guidance.
- could you update the comments at the top of vacuumlazy.c to reflect
this new feature?

-nigel.




Re: [HACKERS] -d option for pg_isready is broken

2013-11-15 Thread Fabrízio de Royes Mello
On Wed, Nov 13, 2013 at 9:37 PM, Josh Berkus j...@agliodbs.com wrote:
>
> handyrep@john:~/handyrep$ pg_isready --version
> pg_isready (PostgreSQL) 9.3.1
>
> handyrep@john:~/handyrep$ pg_isready -h john -p 5432 -U postgres -d
> postgres -q
> pg_isready: missing "=" after "postgres" in connection info string
>
> handyrep@john:~/handyrep$ pg_isready --host=john --port=5432
> --user=postgres --dbname=postgres
> pg_isready: missing "=" after "postgres" in connection info string
>
> handyrep@john:~/handyrep$ pg_isready -h john -p 5432 -U postgres
> john:5432 - accepting connections
>
> so apparently the -d option:
>
> a) doesn't work, and
> b) doesn't do anything
>
> I suggest simply removing it from the utility.
>
> I'll note that the -U option doesn't appear to do anything relevant
> either, but at least it doesn't error unnecessarily:
>
> handyrep@john:~/handyrep$ pg_isready -h john -p 5432 -U no_such_user
> john:5432 - accepting connections


The attached patch fix it.

Regards,

-- 
Fabrízio de Royes Mello
Consultoria/Coaching PostgreSQL
 Timbira: http://www.timbira.com.br
 Blog sobre TI: http://fabriziomello.blogspot.com
 Perfil Linkedin: http://br.linkedin.com/in/fabriziomello
 Twitter: http://twitter.com/fabriziomello
diff --git a/src/bin/scripts/pg_isready.c b/src/bin/scripts/pg_isready.c
index d27ccea..7568df5 100644
--- a/src/bin/scripts/pg_isready.c
+++ b/src/bin/scripts/pg_isready.c
@@ -130,7 +130,7 @@ main(int argc, char **argv)
 	/*
 	 * Get the host and port so we can display them in our output
 	 */
-	if (pgdbname)
+	if (pgdbname && strchr(pgdbname, '=') != NULL)
 	{
 		opts = PQconninfoParse(pgdbname, errmsg);
 		if (opts == NULL)

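The one-line fix above keys off the presence of an equals sign. In isolation, the heuristic looks like the sketch below (illustrative only; note that, like the patch, it would also skip connection URIs such as postgresql://host/db, which contain no '='):

```c
#include <stdbool.h>
#include <string.h>

/*
 * Only treat the -d argument as a conninfo string (and hand it to
 * PQconninfoParse) when it contains '=', as in "host=john dbname=db".
 * A bare database name such as "postgres" is then no longer parsed as
 * a conninfo string, which is what produced the bogus
 * 'missing "=" after "postgres"' error.
 */
static bool
looks_like_conninfo(const char *dbname)
{
	return dbname != NULL && strchr(dbname, '=') != NULL;
}
```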


Re: [HACKERS] pre-commit triggers

2013-11-15 Thread Peter Eisentraut
On Fri, 2013-11-15 at 13:01 -0500, Andrew Dunstan wrote:
> Attached is a patch to provide a new event trigger that will fire on
> transaction commit.

xact.c: In function ‘CommitTransaction’:
xact.c:1835:3: warning: implicit declaration of function 
‘PreCommitTriggersFire’ [-Wimplicit-function-declaration]






Re: [HACKERS] pre-commit triggers

2013-11-15 Thread Andrew Dunstan


On 11/15/2013 09:07 PM, Peter Eisentraut wrote:
> On Fri, 2013-11-15 at 13:01 -0500, Andrew Dunstan wrote:
>> Attached is a patch to provide a new event trigger that will fire on
>> transaction commit.
>
> xact.c: In function ‘CommitTransaction’:
> xact.c:1835:3: warning: implicit declaration of function
> ‘PreCommitTriggersFire’ [-Wimplicit-function-declaration]





Oops, missed a #include. Revised patch attached.

cheers

andrew
diff --git a/doc/src/sgml/event-trigger.sgml b/doc/src/sgml/event-trigger.sgml
index ac31332..3bbf1a4 100644
--- a/doc/src/sgml/event-trigger.sgml
+++ b/doc/src/sgml/event-trigger.sgml
@@ -12,7 +12,7 @@
<productname>PostgreSQL</> also provides event triggers.  Unlike regular
triggers, which are attached to a single table and capture only DML events,
event triggers are global to a particular database and are capable of
-   capturing DDL events.
+   capturing DDL events or transaction commits.
   </para>
 
   <para>
@@ -29,8 +29,9 @@
 occurs in the database in which it is defined. Currently, the only
 supported events are
 <literal>ddl_command_start</>,
- <literal>ddl_command_end</>
- and <literal>sql_drop</>.
+ <literal>ddl_command_end</>,
+ <literal>sql_drop</>, and
+ <literal>transaction_commit</>.
 Support for additional events may be added in future releases.
</para>
 
@@ -65,6 +66,15 @@
</para>
 
<para>
+A <literal>transaction_commit</> trigger is called at the end of a
+transaction, just before any deferred triggers are fired, unless
+no data changes have been made by the transaction, or
+<productname>PostgreSQL</> is running in single-user mode. This is so
+that you can recover from a badly specified <literal>transaction_commit</>
+trigger.
+   </para>
+
+   <para>
 Event triggers (like other functions) cannot be executed in an aborted
 transaction.  Thus, if a DDL command fails with an error, any associated
 <literal>ddl_command_end</> triggers will not be executed.  Conversely,
@@ -77,8 +87,13 @@
</para>
 
<para>
- For a complete list of commands supported by the event trigger mechanism,
- see <xref linkend="event-trigger-matrix">.
+A <literal>transaction_commit</> trigger is also not called in an
+aborted transaction.
+   </para>
+
+   <para>
+ For a complete list of commands supported by the event trigger
+ mechanism, see <xref linkend="event-trigger-matrix">.
</para>
 
<para>
@@ -101,6 +116,11 @@
 to intercept. A common use of such triggers is to restrict the range of
 DDL operations which users may perform.
</para>
+
+   <para>
+<literal>transaction_commit</> triggers do not currently support
+<literal>WHEN</literal> clauses.
+   </para>
  </sect1>
 
  <sect1 id="event-trigger-matrix">
diff --git a/src/backend/access/transam/xact.c b/src/backend/access/transam/xact.c
index 0591f3f..e7f5095 100644
--- a/src/backend/access/transam/xact.c
+++ b/src/backend/access/transam/xact.c
@@ -30,6 +30,7 @@
 #include "catalog/namespace.h"
 #include "catalog/storage.h"
 #include "commands/async.h"
+#include "commands/event_trigger.h"
 #include "commands/tablecmds.h"
 #include "commands/trigger.h"
 #include "executor/spi.h"
@@ -1825,6 +1826,16 @@ CommitTransaction(void)
 	Assert(s->parent == NULL);
 
 	/*
+	 * First fire any pre-commit triggers, so if they in turn cause any
+	 * deferred triggers etc to fire this will be picked up below.
+	 * Only fire them, though, if we have a real transaction ID and
+	 * we're not running standalone. Not firing when standalone provides
+	 * a way to recover from setting up a bad transaction trigger.
+	 */
+	if (s->transactionId != InvalidTransactionId && IsUnderPostmaster)
+		PreCommitTriggersFire();
+
+	/*
 	 * Do pre-commit processing that involves calling user-defined code, such
 	 * as triggers.  Since closing cursors could queue trigger actions,
 	 * triggers could open cursors, etc, we have to keep looping until there's
@@ -2030,6 +2041,16 @@ PrepareTransaction(void)
 	Assert(s->parent == NULL);
 
 	/*
+	 * First fire any pre-commit triggers, so if they in turn cause any
+	 * deferred triggers etc to fire this will be picked up below.
+	 * Only fire them, though, if we have a real transaction ID and
+	 * we're not running standalone. Not firing when standalone provides
+	 * a way to recover from setting up a bad transaction trigger.
+	 */
+	if (s->transactionId != InvalidTransactionId && IsUnderPostmaster)
+		PreCommitTriggersFire();
+
+	/*
 	 * Do pre-commit processing that involves calling user-defined code, such
 	 * as triggers.  Since closing cursors could queue trigger actions,
 	 * triggers could open cursors, etc, we have to keep looping until there's
diff --git a/src/backend/commands/event_trigger.c b/src/backend/commands/event_trigger.c
index 328e2a8..f93441f 100644
--- a/src/backend/commands/event_trigger.c
+++ b/src/backend/commands/event_trigger.c
@@ -153,7 +153,8 @@ CreateEventTrigger(CreateEventTrigStmt *stmt)
 	/* Validate event name. */
 	if (strcmp(stmt->eventname, "ddl_command_start") != 0 &&
 	

Re: ALTER SYSTEM SET command to change postgresql.conf parameters (RE: [HACKERS] Proposal for Allow postgresql.conf values to be changed via SQL [review])

2013-11-15 Thread Amit Kapila
On Fri, Nov 15, 2013 at 10:18 PM, Peter Eisentraut pete...@gmx.net wrote:
> On 11/14/13, 2:50 AM, Amit Kapila wrote:
>> Find the rebased version attached with this mail.
>
> Doesn't build:
>
> openjade  -wall -wno-unused-param -wno-empty -wfully-tagged -D . -D . -c
> /usr/share/sgml/docbook/stylesheet/dsssl/modular/catalog -d stylesheet.dsl -t
> sgml -i output-html -V html-index postgres.sgml
> openjade:reference.sgml:61:3:E: cannot find alter_system.sgml; tried
> ref/alter_system.sgml, ./alter_system.sgml, ./alter_system.sgml,
> /usr/local/share/sgml/alter_system.sgml, /usr/share/sgml/alter_system.sgml
> openjade:config.sgml:164:27:X: reference to non-existent ID "SQL-ALTERSYSTEM"
> make[3]: *** [HTML.index] Error 1
> make[3]: *** Deleting file `HTML.index'
> osx -D. -x lower -i include-xslt-index postgres.sgml postgres.xmltmp
> osx:reference.sgml:61:3:E: cannot find alter_system.sgml; tried
> ref/alter_system.sgml, ./alter_system.sgml,
> /usr/local/share/sgml/alter_system.sgml, /usr/share/sgml/alter_system.sgml
> osx:config.sgml:164:27:X: reference to non-existent ID "SQL-ALTERSYSTEM"
> make[3]: *** [postgres.xml] Error 1
>
> New file missing in patch?

Oops, missed the new sgml file in patch, updated patch to include it.
Many thanks for checking it.

With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com


alter_system_v11.patch
Description: Binary data



Re: [HACKERS] New option for pg_basebackup, to specify a different directory for pg_xlog

2013-11-15 Thread Haribabu kommi
On 15 November 2013 17:26 Magnus Hagander wrote:
On Fri, Nov 15, 2013 at 12:10 PM, Haribabu kommi 
haribabu.ko...@huawei.com wrote:
On 14 November 2013 23:59 Fujii Masao wrote:
 On Thu, Nov 14, 2013 at 9:08 PM, Haribabu kommi
 haribabu.ko...@huawei.com wrote:
  Please find attached the patch, for adding a new option for
  pg_basebackup, to specify a different directory for pg_xlog.

 Sounds good! Here are the review comments:

 +printf(_("--xlogdir=XLOGDIR   location for the
 transaction log directory\n"));

 This message is not aligned well.
Fixed.

 -if (!streamwal || strcmp(filename +
 strlen(filename) - 8, "/pg_xlog") != 0)
 +if ((!streamwal && (strcmp(xlog_dir, "") == 0))
 +|| strcmp(filename + strlen(filename) -
 8, "/pg_xlog") != 0)

 You need to update the source code comment.
Corrected the source code comment. Please check once.

 +#ifdef HAVE_SYMLINK
 +if (symlink(xlog_dir, linkloc) != 0)
 +{
 +fprintf(stderr, _("%s: could not create symbolic link
 \"%s\": %s\n"),
 +progname, linkloc, strerror(errno));
 +exit(1);
 +}
 +#else
 +fprintf(stderr, _("%s: symlinks are not supported on this
 platform "
 +  "cannot use xlogdir"));
 +exit(1);
 +#endif
 +}

 Is it better to call pg_free() at the end? Even if we don't do that, it
 would be almost harmless, though.
Added pg_free to free up the linkloc.

 Don't we need to prevent users from specifying the same directory in
 both --pgdata and --xlogdir?
I feel no need to prevent it; even if the user specifies the same directory for 
both --pgdata and --xlogdir, all the transaction log files will be created in 
the base directory instead of a pg_xlog directory.


Re: [HACKERS] New option for pg_basebackup, to specify a different directory for pg_xlog

2013-11-15 Thread Haribabu kommi
on 15 November 2013 17:26 Magnus Hagander wrote:
On Fri, Nov 15, 2013 at 12:10 PM, Haribabu kommi 
haribabu.ko...@huawei.com wrote:
On 14 November 2013 23:59 Fujii Masao wrote:
 On Thu, Nov 14, 2013 at 9:08 PM, Haribabu kommi
 haribabu.ko...@huawei.com wrote:
  Please find attached the patch, for adding a new option for
  pg_basebackup, to specify a different directory for pg_xlog.

 Sounds good! Here are the review comments:
 Don't we need to prevent users from specifying the same directory in
 both --pgdata and --xlogdir?
I feel no need to prevent it; even if the user specifies the same directory for 
both --pgdata and --xlogdir, all the transaction log files will be created in 
the base directory instead of a pg_xlog directory.

Given how easy it would be to prevent that, I think we should. It would be an 
easy misunderstanding to specify that when you actually want it in 
wherever/pg_xlog. Specifying that would be redundant in the first place, but 
people can do that; it would also be very easy to do it by mistake, and you'd 
end up with something that's really bad, including a recursive symlink.

Presently, initdb also lets the user specify the same directory for data and 
xlog. To check whether the data directory and xlog directory are the same, 
there is a way on Windows (the _fullpath API) to get an absolute path from a 
relative path, but I didn't find any such possibility on Linux.
If anyone has an idea regarding this, please let me know.

I also think it would probably be worthwhile to support this in tar format as 
well, but I guess that's a separate patch really. There's really no reason we 
shouldn't support WAL streaming into a separate tar file. But - separate issue.

Sure. I will prepare a separate patch for the same and submit it in the next 
commit fest.

Regards,
Hari babu.

