Re: [HACKERS] jsonb status

2014-03-16 Thread Erik Rijkers
On Sun, March 16, 2014 09:10, Peter Geoghegan wrote:

 [ jsonb-11.patch.gz ]

This doesn't quite compile:


[...]
patching file src/include/catalog/pg_amop.h
patching file src/include/catalog/pg_amproc.h
Hunk #3 FAILED at 358.
1 out of 3 hunks FAILED -- saving rejects to file 
src/include/catalog/pg_amproc.h.rej
patching file src/include/catalog/pg_cast.h
patching file src/include/catalog/pg_opclass.h
[...]



 cat src/include/catalog/pg_amproc.h.rej
*** src/include/catalog/pg_amproc.h
--- src/include/catalog/pg_amproc.h
*** DATA(insert (   3659   3614 3614 2 3656 ))
*** 358,364 
  DATA(insert ( 3659   3614 3614 3 3657 ));
  DATA(insert ( 3659   3614 3614 4 3658 ));
  DATA(insert ( 3659   3614 3614 5 2700 ));
!

  /* sp-gist */
  DATA(insert ( 3474   3831 3831 1 3469 ));
--- 360,373 
  DATA(insert ( 3659   3614 3614 3 3657 ));
  DATA(insert ( 3659   3614 3614 4 3658 ));
  DATA(insert ( 3659   3614 3614 5 2700 ));
! DATA(insert ( 4036   3802 3802 1 3480 ));
! DATA(insert ( 4036   3802 3802 2 3482 ));
! DATA(insert ( 4036   3802 3802 3 3483 ));
! DATA(insert ( 4036   3802 3802 4 3484 ));
! DATA(insert ( 4037   3802 3802 1 351 ));
! DATA(insert ( 4037   3802 3802 2 3485 ));
! DATA(insert ( 4037   3802 3802 3 3486 ));
! DATA(insert ( 4037   3802 3802 4 3487 ));

  /* sp-gist */
  DATA(insert ( 3474   3831 3831 1 3469 ));







[HACKERS] [RFC] What should we do for reliable WAL archiving?

2014-03-16 Thread MauMau

Hello,

The PostgreSQL documentation describes cp (on UNIX/Linux) or copy (on 
Windows) as an example for archive_command.  However, cp/copy does not sync 
the copied data to disk.  As a result, the completed WAL segments would be 
lost in the following sequence:


1. A WAL segment fills up.

2. The archiver process archives the just filled WAL segment using 
archive_command.  That is, cp/copy reads the WAL segment file from pg_xlog/ 
and writes to the archive area.  At this point, the WAL file is not 
persisted to the archive area yet, because cp/copy doesn't sync the writes.


3. The checkpoint processing removes the WAL segment file from pg_xlog/.

4. The OS crashes.  The filled WAL segment doesn't exist anywhere any more.

Considering PostgreSQL's reputation for reliability and its widespread use in 
enterprise systems, I think something should be done.  Could you give me 
your opinions on the right direction?  Although the documentation certainly 
covers itself by saying "(This is an example, not a recommendation, and might 
not work on all platforms.)", it seems from the pgsql-xxx mailing lists that 
many people are following this example.


* Improve the example in the documentation.
But what command can we use to reliably sync just one file?

* Provide some command, say pg_copy, which copies a file synchronously by 
using fsync(), and describe in the doc something like "for simple use 
cases, you can use pg_copy as the standard reliable copy command."
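
For what it's worth, here is a minimal sketch of what such a pg_copy might look 
like on POSIX systems (the name, the O_EXCL choice and the directory fsync are 
my assumptions; Windows would need a different implementation):

/* pg_copy.c -- hypothetical sketch of a synchronous file copy */
#include <fcntl.h>
#include <libgen.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int
main(int argc, char **argv)
{
	char		buf[65536];
	char	   *dstcopy;
	ssize_t		n;
	int			src, dst, dir;

	if (argc != 3)
	{
		fprintf(stderr, "usage: pg_copy source destination\n");
		return 1;
	}
	src = open(argv[1], O_RDONLY);
	/* O_EXCL: refuse to overwrite an already-archived file */
	dst = open(argv[2], O_WRONLY | O_CREAT | O_EXCL, 0600);
	if (src < 0 || dst < 0)
	{
		perror("open");
		return 1;
	}
	while ((n = read(src, buf, sizeof(buf))) > 0)
		if (write(dst, buf, n) != n)
		{
			perror("write");
			return 1;
		}
	if (n < 0 || fsync(dst) != 0)	/* flush the file contents to disk */
	{
		perror("read/fsync");
		return 1;
	}
	close(dst);
	close(src);
	/* also sync the directory entry, so the new file survives an OS crash */
	dstcopy = strdup(argv[2]);
	dir = open(dirname(dstcopy), O_RDONLY);
	if (dir < 0 || fsync(dir) != 0)
	{
		perror("fsync directory");
		return 1;
	}
	close(dir);
	return 0;
}

It could then be used analogously to the cp example, e.g. 
archive_command = 'pg_copy %p /mnt/server/archivedir/%f'.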


Related to this topic, pg_basebackup doesn't fsync the backed-up files.  I'm 
afraid this too is different from what users expect --- I guess they 
would expect the backup to be reliably available after pg_basebackup completes, 
even if the machine crashes.


Regards
MauMau





Re: [HACKERS] [bug fix] postgres.exe fails to start on Windows Server 2012 due to ASLR

2014-03-16 Thread Magnus Hagander
On Sat, Feb 22, 2014 at 5:44 AM, MauMau maumau...@gmail.com wrote:

 From: Magnus Hagander mag...@hagander.net

  Does somebody want to look at backpatching this to 9.1 and earlier, or
 should we just say that it's not fully supported on those Windows versions
 unless you apply the registry workaround?


 Please use the attached patch.  It applies cleanly to both 9.1 and 9.0.


Applied, apologies for the delay.


We don't need to consider 8.4, because ASLR became enabled by default in
 Visual Studio 2008 and 8.4 doesn't support building with 2008.


Ok, thanks for confirming!


-- 
 Magnus Hagander
 Me: http://www.hagander.net/
 Work: http://www.redpill-linpro.com/


Re: [HACKERS] Upcoming back branch releases

2014-03-16 Thread Magnus Hagander
On Sun, Mar 16, 2014 at 3:20 AM, MauMau maumau...@gmail.com wrote:

 From: Tom Lane t...@sss.pgh.pa.us

  After some discussion, the core committee has concluded that the
 WAL-replay bug fixed in commit 6bfa88acd3df830a5f7e8677c13512b1b50ae813
 is indeed bad enough to justify near-term update releases.  Since
 there seems no point in being slow about it, tarballs will be wrapped
 Monday (3/17) for public announcement Thursday (3/20).


 Regarding:

 Prevent intermittent "could not reserve shared memory region" failures on
 recent Windows versions (MauMau)

 Could you include the patch I sent for 9.0/9.1 in response to Magnus san?

 http://www.postgresql.org/message-id/9C709659FE3C4B8DA7135C6F307B83
 F5@maumau


Hi!

I just applied this patch. Apologies for the delay, and thanks for the
reminder!


-- 
 Magnus Hagander
 Me: http://www.hagander.net/
 Work: http://www.redpill-linpro.com/


Re: [HACKERS] requested shared memory size overflows size_t

2014-03-16 Thread Yuri Levinsky
 Dear Craig,
The output is:
#define SIZEOF_OFF_T 8
#define SIZEOF_VOID_P 4
#define SIZEOF_SIZE_T 4
#define SIZEOF_LONG 4

Configured with: ../configure --with-as=/usr/ccs/bin/as 
--with-ld=/usr/ccs/bin/ld --enable-shared --enable-languages=c,c++,f77
Thread model: posix
gcc version 3.4.6



Sincerely yours,


Yuri Levinsky, DBA
Celltick Technologies Ltd., 32 Maskit St., Herzliya 46733, Israel
Mobile: +972 54 6107703, Office: +972 9 9710239; Fax: +972 9 9710222

-Original Message-
From: Craig Ringer [mailto:cr...@2ndquadrant.com]
Sent: Friday, March 14, 2014 4:41 AM
To: Yuri Levinsky; Robert Haas
Cc: Heikki Linnakangas; pgsql-hackers@postgresql.org
Subject: Re: [HACKERS] requested shared memory size overflows size_t

On 03/04/2014 10:53 PM, Yuri Levinsky wrote:
 Please advise me: I just downloaded the source and compiled it. Sun Spark 
 Solaris 9 is always 64 bit, I verified it with sys admin. He may run 32 bit 
 applications as well. Have I use some special option during compilation to 
 verify that compiled PostgreSQL is actually 64 bit app?

Many platforms include both 32-bit and 64-bit target toolchains. So you might 
be on a 64-bit platform, but that doesn't mean you aren't compiling a 32-bit 
executable.

Please run:

grep '^#define SIZEOF' config.log

and post the results.

-- 
 Craig Ringer   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] [RFC] What should we do for reliable WAL archiving?

2014-03-16 Thread Amit Kapila
On Sun, Mar 16, 2014 at 3:53 PM, MauMau maumau...@gmail.com wrote:
 Hello,

 The PostgreSQL documentation describes cp (on UNIX/Linux) or copy (on
 Windows) as an example for archive_command.  However, cp/copy does not sync
 the copied data to disk.  As a result, the completed WAL segments would be
 lost in the following sequence:

 1. A WAL segment fills up.

 2. The archiver process archives the just filled WAL segment using
 archive_command.  That is, cp/copy reads the WAL segment file from pg_xlog/
 and writes to the archive area.  At this point, the WAL file is not
 persisted to the archive area yet, because cp/copy doesn't sync the writes.

 3. The checkpoint processing removes the WAL segment file from pg_xlog/.

 4. The OS crashes.  The filled WAL segment doesn't exist anywhere any more.

 Considering the reliable image of PostgreSQL and widespread use in
 enterprise systems, I think something should be done.  Could you give me
 your opinions on the right direction?

How about using pg_receivexlog for archiving purpose?
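
For example, something along these lines run on the archive host (host name, 
user and directory are placeholders):

pg_receivexlog -h primary.example.com -U replication -D /mnt/server/archivedir

It streams WAL over the replication protocol and fsyncs what it writes, so the 
archive does not depend on archive_command at all.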


With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com




Re: [HACKERS] jsonb status - ‘JsonbValue’ has no member named ‘size’

2014-03-16 Thread Erik Rijkers
On Sun, March 16, 2014 09:50, Peter Geoghegan wrote:
 On Sun, Mar 16, 2014 at 1:28 AM, Erik Rijkers e...@xs4all.nl wrote:
 [ jsonb-11.patch.gz ]

 This doesn't quite compile:

 Sorry. I guess Andrew's earlier merging of master was insufficient.

 Attached revision fixes bitrot.


The patch applies, but there is still something wrong, now during the actual 
compile (of contrib).  This is actually the same error I get when pulling 
straight from the git repo (since somewhere yesterday, I think, or maybe a bit 
earlier still).  The git repo being https://github.com/feodor/postgres.git, 
branch jsonb_and_hstore.



-- [2014.03.16 12:29:27 jsonb/0] make contrib
hstore_io.c: In function ‘hstore_to_jsonb’:
hstore_io.c:1398:6: error: ‘JsonbValue’ has no member named ‘size’
   key.size = sizeof(JEntry);
  ^
hstore_io.c:1402:6: error: ‘JsonbValue’ has no member named ‘size’
   key.size += key.string.len;
  ^
hstore_io.c:1408:7: error: ‘JsonbValue’ has no member named ‘size’
val.size = sizeof(JEntry);
   ^
hstore_io.c:1413:7: error: ‘JsonbValue’ has no member named ‘size’
val.size = sizeof(JEntry);
   ^
hstore_io.c:1417:7: error: ‘JsonbValue’ has no member named ‘size’
val.size += val.string.len;
   ^
hstore_io.c: In function ‘hstore_to_jsonb_loose’:
hstore_io.c:1450:6: error: ‘JsonbValue’ has no member named ‘size’
   key.size = sizeof(JEntry);
  ^
hstore_io.c:1454:6: error: ‘JsonbValue’ has no member named ‘size’
   key.size += key.string.len;
  ^
hstore_io.c:1458:6: error: ‘JsonbValue’ has no member named ‘size’
   val.size = sizeof(JEntry);
  ^
hstore_io.c:1524:8: error: ‘JsonbValue’ has no member named ‘size’
 val.size += VARSIZE_ANY(val.numeric) +sizeof(JEntry);
^
hstore_io.c:1528:8: error: ‘JsonbValue’ has no member named ‘size’
 val.size = sizeof(JEntry);
^
hstore_io.c:1532:8: error: ‘JsonbValue’ has no member named ‘size’
 val.size += val.string.len;
^
make[1]: *** [hstore_io.o] Error 1
make: *** [all-hstore-recurse] Error 2
-- make returned 2 - abort


thanks,

Erik Rijkers








[HACKERS] AUTOCOMMIT off + ON_ERROR_ROLLBACK usability

2014-03-16 Thread Greg Stark
I've been putting my money where my mouth is and running with
AUTOCOMMIT=off and ON_ERROR_ROLLBACK set which I've been recommending
for years but hadn't gotten around to switching to myself.

I think we knew the user experience wasn't perfect but it would be
nice to enumerate the problems and they don't seem insurmountable.
Some of them seem quite trivial.

1) I find it essential to add %x to the prompt, or else I'm a)
continually forgetting to commit and b) continually being surprised by
being in a transaction when I didn't expect or need to be. In fact I
added it three times as '%/%R%x%x%x%# '. It would be nice to be able
to put something other than * there though. (See the example at the end
of this message.)

2) Some commands begin transactions that are a complete surprise.
CHECKPOINT in particular was a shock.

3) Plain SELECTs in read committed mode begin a transaction even
though there's no purpose served by the transaction -- there are no
snapshots pending and the locks taken don't seem relevant to
subsequent queries. I suppose it means if you select multiple times
from the same table you're guaranteed to get a consistent schema but
since DML can commit in between that doesn't seem useful. Notably \d
and the like do *not* begin a new transaction.

4) Some commands can't be run in a transaction, such as VACUUM and
CREATE INDEX CONCURRENTLY, and since you're often in a transaction
unexpectedly this often gets in your way.

I think the user expectation is that if after running a command the
session still only has a vxid and has no remaining snapshots (i.e.
it's not in serializable or read consistent mode) then the transaction
will just automatically commit/abort after the command. I'm not sure
if it's safe to go that far but it sure would be nice.
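
For reference, the setup described above boils down to something like this in
~/.psqlrc (the exact prompt string is a matter of taste):

\set AUTOCOMMIT off
\set ON_ERROR_ROLLBACK on
\set PROMPT1 '%/%R%x%# '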

-- 
greg




[HACKERS] [WIP] Better partial index-only scans

2014-03-16 Thread Joshua Yanovski
Proof of concept initial patch for enabling index only scans for
partial indices even when an attribute is not in the target list, as
long as it is only used in restriction clauses that can be proved by
the index predicate.  This also works for index quals, though they
still can't be used in the target list.  However, this patch may be
inefficient since it duplicates effort that is currently delayed until
after the best plan is chosen.
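
To make the target case concrete, a hedged illustration (table and column
names are made up):

CREATE TABLE orders (id int, customer int, shipped boolean);
CREATE INDEX orders_shipped_idx ON orders (customer) WHERE shipped;

-- "shipped" is not stored in the index, but the index predicate proves the
-- WHERE clause, so an index-only scan returning "customer" should be safe.
-- Today the planner rejects the index-only scan because "shipped" appears in
-- a restriction clause; with this patch it can be considered.
SELECT customer FROM orders WHERE shipped;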

The patch works by basically repeating the logic from
create_indexscan_plan in createplan.c that determines which clauses
can't be discarded, instead of the current approach, which just
assumes that any attribute referenced anywhere in a restriction
clause has to be a column in the relevant index.  It should build
against master and passes make check for me.  It also includes a minor
fix in the same code in createplan.c to make sure we're explicitly
comparing an empty list to NIL, but I can take that out if that's not
considered in scope.  If this were the final patch I'd probably
coalesce the code used in both places into a single function, but
since I'm not certain that the implementation in check_index_only
won't change substantially I held off on that.

Since the original comment suggested that this was not done due to
planner performance concerns, I assume the performance of this
approach is unacceptable (though I did a few benchmarks and wasn't
able to detect a consistent difference--what would be a good test for
this?).  As such, this is intended as more of a first pass that I can
build on, but I wanted to get feedback at this stage on where we can
improve (particularly if there were already ideas on how this might be
done, as the comment hints).  Index only scans cost less than regular
index scans so I don't think we can get away with waiting until we've
chosen the best plan before we do the work described above.  That
said, as I see it performance could improve in any combination of five
ways:
* Improve the performance of determining which clauses can't be
discarded (e.g. precompute some information about equivalence classes
for index predicates, mess around with the order in which we check the
clauses to make it fail faster, switch to real union-find data
structures for equivalence classes).
* Find a cleverer way of figuring out whether a partial index can be
used than just checking which clauses can't be discarded.
* Use a simpler heuristic (that doesn't match what we use to determine
which clauses can be discarded, but still matches more than we do
now).
* Take advantage of work we do here to speed things up elsewhere (e.g.
if this does get chosen as the best plan we don't need to recompute
the same information in create_indexscan_plan).
* Delay determining whether to use an index scan or index only scan
until after cost analysis somehow.  I'm not sure exactly what this
would entail.

Since this is my first real work with the codebase, I'd really
appreciate it if people could help me figure out the best approach
here (and, more importantly, if one is necessary based on benchmarks).
 And while this should go without saying, if this patch doesn't
actually work then please let me know, since all the above is based on
the assumption that what's there is enough :)

Thanks,
Joshua Yanovski
diff --git a/src/backend/optimizer/path/indxpath.c b/src/backend/optimizer/path/indxpath.c
new file mode 100644
index a912174..ed87261
*** a/src/backend/optimizer/path/indxpath.c
--- b/src/backend/optimizer/path/indxpath.c
***
*** 32,37 
--- 32,38 
  #include "optimizer/predtest.h"
  #include "optimizer/restrictinfo.h"
  #include "optimizer/var.h"
+ #include "parser/parsetree.h"
  #include "utils/builtins.h"
  #include "utils/bytea.h"
  #include "utils/lsyscache.h"
*** static PathClauseUsage *classify_index_c
*** 135,141 
  static Relids get_bitmap_tree_required_outer(Path *bitmapqual);
  static void find_indexpath_quals(Path *bitmapqual, List **quals, List **preds);
  static int	find_list_position(Node *node, List **nodelist);
! static bool check_index_only(RelOptInfo *rel, IndexOptInfo *index);
  static double get_loop_count(PlannerInfo *root, Relids outer_relids);
  static void match_restriction_clauses_to_index(RelOptInfo *rel,
  					IndexOptInfo *index,
--- 136,143 
  static Relids get_bitmap_tree_required_outer(Path *bitmapqual);
  static void find_indexpath_quals(Path *bitmapqual, List **quals, List **preds);
  static int	find_list_position(Node *node, List **nodelist);
! static bool check_index_only(PlannerInfo *root, RelOptInfo *rel,
! 				IndexOptInfo *index, List *index_clauses);
  static double get_loop_count(PlannerInfo *root, Relids outer_relids);
  static void match_restriction_clauses_to_index(RelOptInfo *rel,
  					IndexOptInfo *index,
*** build_index_paths(PlannerInfo 

Re: [HACKERS] [bug fix] PostgreSQL fails to start on Windows if it crashes after tablespace creation

2014-03-16 Thread Amit Kapila
On Wed, Jan 15, 2014 at 8:12 PM, MauMau maumau...@gmail.com wrote:
 From: Asif Naeem anaeem...@gmail.com

 As you have

 followed destroy_tablespace_directories() function, Is there any specific
 reason not to use same logic to detect type of the file/link i.e.
 (lstat(linkloc, &st) == 0 && S_ISDIR(st.st_mode)), It also seems to have
 more appropriate error message i.e.

 Thanks for reviewing and testing the patch.  Yes, at first I did what you
 mentioned, but modified the patch according to some advice in the mail
 thread.  During redo, create_tablespace_directories() needs to handle the
 case where the $PGDATA/pg_tblspc/xxx is not a symlink but a directory even
 on UNIX/Linux.  Please see TablespaceCreateDbspace().
 destroy_tablespace_directories() doesn't have to handle such situation.

If create_tablespace_directories() needs to handle a directory on both
Windows and Linux, then shouldn't it be a runtime check, as in your first
version, rather than a compile-time check?
Also, isn't that the reason why destroy_tablespace_directories() has a similar
check?

With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com




Re: [HACKERS] requested shared memory size overflows size_t

2014-03-16 Thread Craig Ringer
On 03/16/2014 06:57 PM, Yuri Levinsky wrote:
  Dear Craig,
 The output is:
 #define SIZEOF_OFF_T 8
 #define SIZEOF_VOID_P 4
 #define SIZEOF_SIZE_T 4

OK, that confirms you have done a 32-bit build.

You need to figure out how to invoke the 64-bit toolchain on your
Solaris version. You might need to alter the PATH environment variable
or set architecture-specific CFLAGS.
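
For example (untested here, and the exact flags depend on your compiler;
-m64 works for GCC, -m64 or -xarch=v9 for Sun Studio):

./configure CC="gcc -m64" ...
# or, with Sun Studio:
./configure CC="/opt/SUNWspro/bin/cc -m64" ...

After re-running configure, the SIZEOF_* values in config.log should all
come out as 8.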

-- 
 Craig Ringer   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services




[HACKERS] create type- similar char

2014-03-16 Thread Mohsen SM
I created a new type with the CREATE TYPE command.
Its INTERNALLENGTH is VARIABLE, but I want my type to behave like char,
so that when I create this table:
CREATE TABLE tbl (col1 NEWTYPE);
and then run this query, it gets an error for its length:
insert into tbl values('dd');
The error is:
ERROR:  value too long for type character(1)
But my NEWTYPE doesn't get this error and works like varchar.
How can I do that?
I checked the bpchar() function for the char type: when I work with char and
try to insert a word longer than one character, bpchar() is called.
But for my NEWTYPE, the newtype() function is not called.
How can I correct this problem?


Re: [HACKERS] create type- similar char

2014-03-16 Thread Pavel Stehule
Hello

I don't know where the problem is, but I can offer two pieces of advice:

a) start by extending some known extension

b) look at the source code to see where and why the exception was raised. Verbose mode
can help:

[pavel@localhost ~]$ psql postgres
psql (9.4devel)
Type "help" for help.

postgres=# \set VERBOSITY verbose
postgres=# SELECT 10/0;
ERROR:  22012: division by zero
LOCATION:  int4div, int.c:719
postgres=#


Regards

Pavel


2014-03-16 14:41 GMT+01:00 Mohsen SM mohsensoodk...@gmail.com:

 I create one new type with CREATE TYPE command.
 for my type its INTERNALLENGTH is VARIABLE .
 but I want to my type behavior  similar to char and when I type this query:
 CREATE TABLE tbl (col1 NEWTYPE);
 then when I write this query, so it get an error for it's length:
 insert into tbl values('dd');
 its error is :
 ERROR:  value too long for type character(1)
 but my NEWTYPE don't get this error and work similar to varchar.
 how I can do it?
 I check for char type the bpchar() function. when I work with char type
 and want to insert one word longest one character so, bpchar() called.
 but for my NEWTYPE don't call the newtype() function.
 how can I correct this problem?




Re: [HACKERS] requested shared memory size overflows size_t

2014-03-16 Thread Yuri Levinsky
Craig,
Is this output correct now?
#define SIZEOF_OFF_T 8
#define SIZEOF_VOID_P 8
#define SIZEOF_SIZE_T 8
#define SIZEOF_LONG 8


Sincerely yours,


Yuri Levinsky, DBA
Celltick Technologies Ltd., 32 Maskit St., Herzliya 46733, Israel
Mobile: +972 54 6107703, Office: +972 9 9710239; Fax: +972 9 9710222


-Original Message-
From: Craig Ringer [mailto:cr...@2ndquadrant.com] 
Sent: Sunday, March 16, 2014 2:40 PM
To: Yuri Levinsky; Robert Haas
Cc: Heikki Linnakangas; pgsql-hackers@postgresql.org
Subject: Re: [HACKERS] requested shared memory size overflows size_t

On 03/16/2014 06:57 PM, Yuri Levinsky wrote:
  Dear Craig,
 The output is:
 #define SIZEOF_OFF_T 8
 #define SIZEOF_VOID_P 4
 #define SIZEOF_SIZE_T 4

OK, that confirms you have done a 32-bit build.

You need to figure out how to invoke the 64-bit toolchain on your Solaris 
version. You might need to alter the PATH environment variable or set 
architecture-specific CFLAGS.

-- 
 Craig Ringer   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] jsonb status - 'JsonbValue' has no member named 'size'

2014-03-16 Thread Erik Rijkers
On Sun, March 16, 2014 13:23, Andrew Dunstan wrote:

 [ jsonb-12.patch ]

The patch applies, compiles, and builds, but installing contrib fails with this error:

make[1]: *** No rule to make target `hstore--1.2.sql', needed by `installdata'. 
 Stop.
make: *** [install-hstore-recurse] Error 2

After that an instance can be started but hstore is not available:

create extension hstore;
ERROR:  could not stat file 
/home/aardvark/pg_stuff/pg_installations/pgsql.jsonb/share/extension/hstore--1.3.sql:
 No such
file or directory







Re: [HACKERS] [RFC] What should we do for reliable WAL archiving?

2014-03-16 Thread MauMau

From: Amit Kapila amit.kapil...@gmail.com

How about using pg_receivexlog for archiving purpose?


pg_receivexlog is good in that it does fsync().  But it seems difficult to 
use correctly, and I'm not sure I can catch all WAL segments without any 
loss.  pg_receivexlog must be started along with postmaster and monitored 
somehow.  This won't be very easy, at least on Windows.


The pg_receivexlog reference page suggests another difficulty:

Notes
When using pg_receivexlog instead of archive_command, the server will 
continue to recycle transaction log files even if the backups are not 
properly archived, since there is no command that fails. This can be worked 
around by having an archive_command that fails when the file has not been 
properly archived yet, for example:

archive_command = 'sleep 5 && test -f /mnt/server/archivedir/%f'

This suggestion is not correct, because it only checks the existence of the 
file.  What if the file size is less than 16MB?  How can we check if the 
file is completely archived?
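
One hedged refinement (my sketch, not from the docs) would be to also require
the full 16MB segment size, e.g.:

archive_command = 'sleep 5 && test -n "$(find /mnt/server/archivedir/%f -size 16777216c 2>/dev/null)"'

That still only proves the size, not the content, so it narrows the window
rather than closing it.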


Regards
MauMau





Re: [HACKERS] requested shared memory size overflows size_t

2014-03-16 Thread Craig Ringer

On 16 Mar 2014 22:06, Yuri Levinsky yu...@celltick.com wrote:

 Craig, 
 Is this output correct now? 
 #define SIZEOF_OFF_T 8 
 #define SIZEOF_VOID_P 8

Well, you are getting 64-bit pointers. So if you are trying to produce a 64-bit 
executable then yes, this looks appropriate.


Re: [HACKERS] [RFC] What should we do for reliable WAL archiving?

2014-03-16 Thread Greg Stark
On Sun, Mar 16, 2014 at 10:23 AM, MauMau maumau...@gmail.com wrote:
 The PostgreSQL documentation describes cp (on UNIX/Linux) or copy (on
 Windows) as an example for archive_command.  However, cp/copy does not sync
 the copied data to disk

I'm actually a lot less concerned about fsyncing the backup than I am
about fsyncing the restore. The backup is just a bunch of files for
the user to make use of. They might copy them around, compress them,
move them onto tape or other storage. They need to be aware of the
persistence of whatever storage system they're putting them in.

But when they do a restore they just untar the backup (or use whatever other
extraction tool) and then hand those files to Postgres to maintain. I
bet the number of people who fsync or sync the data files after
untarring their backups is vanishingly small, and problems could
manifest later, after Postgres has been running.

WAL-e recently changed to fsync each data file and the directories
containing them after restore. But perhaps Postgres should open and
fsync each file in the database when it starts up?

In most file systems files written to are guaranteed to be synced
within a configurable amount of time (in some systems unless the
filesystem can't keep up). So the practical risk may be small. But in
theory a database that wasn't synced when it was restored could
suddenly lose files days or months later when a crash occurs and some
data files weren't touched by the database in the intervening time.
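
As a strawman, a tiny helper like this (hypothetical, POSIX only) could be run
over a freshly restored data directory with find:

/* fsync_file.c -- fsync every path given on the command line */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int
main(int argc, char **argv)
{
	int			i, rc = 0;

	for (i = 1; i < argc; i++)
	{
		int			fd = open(argv[i], O_RDONLY);

		if (fd < 0 || fsync(fd) != 0)
		{
			perror(argv[i]);	/* report the failure but keep going */
			rc = 1;
		}
		if (fd >= 0)
			close(fd);
	}
	return rc;
}

e.g. "find $PGDATA -exec ./fsync_file {} +" after untarring, which covers the
directories as well (fsync of a directory opened read-only works at least on
Linux).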


-- 
greg




Re: [HACKERS] Portability issues in shm_mq

2014-03-16 Thread Tom Lane
Robert Haas robertmh...@gmail.com writes:
 On Fri, Mar 14, 2014 at 4:43 PM, Tom Lane t...@sss.pgh.pa.us wrote:
 How is that leading to a crash?  Well, this machine is 32-bit, so MAXALIGN
 is only 4.  This means it is possible for an odd-length message cum
 message length word to not exactly divide the size of the shared memory
 ring buffer, resulting in cases where an 8-byte message length word is
 wrapped around the end of the buffer.

 Argh.  I think I forced the size of the buffer to be MAXALIGN'd, but
 what it really needs is to be a multiple of the size of uint64.

After sleeping on it, I think what you're proposing here is to double down
on a wrong design decision.  ISTM you should change the message length
words to be size_t (or possibly ssize_t, if you're depending on signed
arithmetic), which would let you keep using MAXALIGN as the alignment
macro.  There is absolutely no benefit, either for performance or code
readability, in forcing 32-bit machines to use 64-bit message length
words.  Indeed, by not using the same alignment macros as everywhere else
and not being able to use %zu for debug printouts, I think the only real
effect you're producing is to make the DSM/MQ stuff more and more randomly
unlike the rest of Postgres.  Please reconsider while it's still not too
late to change those APIs.

regards, tom lane




Re: [HACKERS] create type- similar char

2014-03-16 Thread Tom Lane
Mohsen SM mohsensoodk...@gmail.com writes:
 I create one new type with CREATE TYPE command.
 for my type its INTERNALLENGTH is VARIABLE .
 but I want to my type behavior  similar to char and when I type this query:
 CREATE TABLE tbl (col1 NEWTYPE);
 then when I write this query, so it get an error for it's length:
 insert into tbl values('dd');
 its error is :
 ERROR:  value too long for type character(1)
 but my NEWTYPE don't get this error and work similar to varchar.
 how I can do it?
 I check for char type the bpchar() function. when I work with char type and
 want to insert one word longest one character so, bpchar() called.
 but for my NEWTYPE don't call the newtype() function.

It sounds like you made a length coercion cast function, but didn't
actually declare a cast based on it.  See CREATE CAST.
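
In other words, something along these lines (names follow the example above
and are assumptions; the three-argument signature is the convention for
length-coercion functions, cf. bpchar):

CREATE FUNCTION newtype(newtype, integer, boolean) RETURNS newtype
    AS 'MODULE_PATHNAME', 'newtype_coerce' LANGUAGE C IMMUTABLE STRICT;

CREATE CAST (newtype AS newtype)
    WITH FUNCTION newtype(newtype, integer, boolean) AS IMPLICIT;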

regards, tom lane




Re: [HACKERS] Minimum supported version of Python?

2014-03-16 Thread Tom Lane
Brendan Jurd dire...@gmail.com writes:
 On 16 March 2014 11:55, Tom Lane t...@sss.pgh.pa.us wrote:
 Our documentation claims that the minimum Python version for plpython
 is 2.3.  However, an attempt to build with that on an old Mac yielded
 a bunch of failures in the plpython_types regression test, all of the
 form
 ...
 Personally I have no desire to put any effort into fixing this, and
 thus suggest that we just change the documentation to specify that 2.5
 is the minimum Python version since 9.0.

 +1 for updating the documentation.  2.5 has been around since 2006 so
 we are offering a huge range of compatibility as it stands.  Versions
 earlier than 2.5 are probably only of interest to historians at this
 point.

Yeah, the lack of complaints from the field suggests strongly that
nobody cares about this, so I'll just go change the docs.

regards, tom lane




Re: [HACKERS] First-draft release notes for next week's releases

2014-03-16 Thread Josh Berkus
On 03/15/2014 01:02 PM, Tom Lane wrote:
 First-draft release notes are committed, and should be visible at
 http://www.postgresql.org/docs/devel/static/release-9-3-4.html
 once guaibasaurus does its next buildfarm run a few minutes from
 now.  Any suggestions?

Hmmm, not sure I like this.  It's complicated without being complete,
and supplies just enough information to get someone into trouble:

Also, the error fixed in the second changelog entry below could have
caused some bloat in statistics data. Users who have done many DROP
DATABASE commands since upgrading to 9.3 may wish to manually remove
files in $PGDATA/pg_stat_tmp (or $PGDATA/pg_stat if the server is not
running) that have old modification times and do not correspond to any
database OID present in $PGDATA/base. If you do this, note that the file
db_0.stat is a valid file even though it does not correspond to any
$PGDATA/base subdirectory.

I kind of think that either we should provide complete instructions
(which would be about 3/4 of a page), or provide limited instructions
and assume the only users who will do this are ones who already
understand pg_stat (a reasonable assumption in my opinion).  So my
suggestion is to move the advice paragraph from E.1.1 to the individual fix
entry in E.1.2, and change it to this:


* Remove the correct per-database statistics file during DROP DATABASE
(Tomas Vondra)

This fix prevents a permanent leak of statistics file space.

Users who have done many DROP DATABASE commands in PostgreSQL 9.3 may
wish to examine their statistics directory for statistics files which do
not correspond to any existing database and delete them.  Please note
that db_0.stat is a needed statistics file.





-- 
Josh Berkus
PostgreSQL Experts Inc.
http://pgexperts.com




Re: [HACKERS] [RFC] What should we do for reliable WAL archiving?

2014-03-16 Thread Andreas Karlsson

On 03/16/2014 03:23 PM, MauMau wrote:

From: Amit Kapila amit.kapil...@gmail.com

How about using pg_receivexlog for archiving purpose?


pg_receivexlog is good in that it does fsync().  But it seems difficult
to use correctly, and I'm not sure if I can catch all WAL segments
without any loss.  pg_receivexlog must be started with postmaster and
monitored with some measures.  This won't be very easy at least on Windows.


Replication slots should solve the issue of making sure to catch all of 
the WAL.


--
Andreas Karlsson




Re: [HACKERS] First-draft release notes for next week's releases

2014-03-16 Thread Greg Stark
This is not really accurate:

This error allowed multiple versions of the same row to become
visible to queries, resulting in apparent duplicates. Since the error
is in WAL replay, it would only manifest during crash recovery or on
standby servers.

I think the idea is coming from what the second sentence below is
getting at but it may be too complex to explain in a release note:

The error causes some rows to disappear from indexes resulting in
inconsistent query results on a hot standby depending on whether
indexes are used. If the standby is subsequently activated or if it
occurs during recovery after a crash or backup restore it could result
in unique constraint violations as well.

I would consider adding something like "For the problem to occur, a
foreign key from another table must exist and a new row must be added
to that other table around the same time (possibly in the same
transaction) as an update to the referenced row."  That would help
people judge whether their databases are vulnerable. If they don't
have foreign keys, or if they have a coding pattern that causes this to
happen regularly, then they should be able to figure that out and
possibly disable them if they can't update promptly.




Re: [HACKERS] Archive recovery won't be completed on some situation.

2014-03-16 Thread Kyotaro HORIGUCHI
Thank you for the good suggestion.

  What the mess is once entering this situation, I could find no
  formal operation to exit from it.
 
 Though this is formal way, you can exit from that situation by
 
 (1) Remove recovery.conf and start the server with crash recovery
 (2) Execute pg_start_backup() after crash recovery ends
 (3) Copy backup_label to somewhere
 (4) Execute pg_stop_backup() and shutdown the server
 (5) Copy backup_label back to $PGDATA
 (6) Create recovery.conf and start the server with archive recovery

That will do. And pg_resetxlog was the first thing I checked out
for resetting backupStartPoint.

 What about adding new option into pg_resetxlog so that we can
 reset the pg_control's backup start location? Even after we've
 accidentally entered into the situation that you described, we can
 exit from that by resetting the backup start location in pg_control.
 Also this option seems helpful to salvage the data as a last resort
 from the corrupted backup.

That is in far better proportion than a recovery.conf option :), since
pg_resetxlog is already warned to be dangerous by its nature. Anyway, I'll
first make sure of the situation behind the trouble.

regards,

-- 
Kyotaro Horiguchi
NTT Open Source Software Center




Re: [HACKERS] [RFC] What should we do for reliable WAL archiving?

2014-03-16 Thread Robert Haas
On Sun, Mar 16, 2014 at 6:23 AM, MauMau maumau...@gmail.com wrote:
 The PostgreSQL documentation describes cp (on UNIX/Linux) or copy (on
 Windows) as an example for archive_command.  However, cp/copy does not sync
 the copied data to disk.  As a result, the completed WAL segments would be
 lost in the following sequence:

 1. A WAL segment fills up.

 2. The archiver process archives the just filled WAL segment using
 archive_command.  That is, cp/copy reads the WAL segment file from pg_xlog/
 and writes to the archive area.  At this point, the WAL file is not
 persisted to the archive area yet, because cp/copy doesn't sync the writes.

 3. The checkpoint processing removes the WAL segment file from pg_xlog/.

 4. The OS crashes.  The filled WAL segment doesn't exist anywhere any more.

 Considering the reliable image of PostgreSQL and widespread use in
 enterprise systems, I think something should be done.  Could you give me
 your opinions on the right direction?  Although the doc certainly escapes by
 saying (This is an example, not a recommendation, and might not work on all
 platforms.), it seems from pgsql-xxx MLs that many people are following
 this example.

 * Improve the example in the documentation.
 But what command can we use to reliably sync just one file?

 * Provide some command, say pg_copy, which copies a file synchronously by
 using fsync(), and describes in the doc something like for simple use
 cases, you can use pg_copy as the standard reliable copy command.

+1.  This won't obviate the need for tools to manage replication, but
it would make it possible to get the simplest case right without
guessing.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] Portability issues in shm_mq

2014-03-16 Thread Robert Haas
On Sun, Mar 16, 2014 at 11:26 AM, Tom Lane t...@sss.pgh.pa.us wrote:
 Robert Haas robertmh...@gmail.com writes:
 On Fri, Mar 14, 2014 at 4:43 PM, Tom Lane t...@sss.pgh.pa.us wrote:
 How is that leading to a crash?  Well, this machine is 32-bit, so MAXALIGN
 is only 4.  This means it is possible for an odd-length message cum
 message length word to not exactly divide the size of the shared memory
 ring buffer, resulting in cases where an 8-byte message length word is
 wrapped around the end of the buffer.

 Argh.  I think I forced the size of the buffer to be MAXALIGN'd, but
 what it really needs is to be a multiple of the size of uint64.

 After sleeping on it, I think what you're proposing here is to double down
 on a wrong design decision.  ISTM you should change the message length
 words to be size_t (or possibly ssize_t, if you're depending on signed
 arithmetic), which would let you keep using MAXALIGN as the alignment
 macro.  There is absolutely no benefit, either for performance or code
 readability, in forcing 32-bit machines to use 64-bit message length
 words.  Indeed, by not using the same alignment macros as everywhere else
 and not being able to use %zu for debug printouts, I think the only real
 effect you're producing is to make the DSM/MQ stuff more and more randomly
 unlike the rest of Postgres.  Please reconsider while it's still not too
 late to change those APIs.

Hmm.  That's not a terrible idea.  I think part of the reason I did it
this way is because, although the size of an individual message can be
limited to size_t, the queue maintains a counter of the total number
of bytes ever sent and received, and that has to use 64-bit arithmetic
so it doesn't overflow (much as we do for LSNs).  It seemed simpler to
make all of the lengths uint64 rather than the lengths of individual
messages Size and the total number of bytes sent and received uint64.
However, that could probably be worked out.

But I think there's another possible problem here.  In order for reads
from the buffer not to suffer alignment problems, the chunk size for
reads and writes from the buffer needs to be MAXIMUM_ALIGNOF (or some
multiple of it).  And in order to avoid a great deal of additional and
unwarranted complexity, the size of the message word also needs to be
MAXIMUM_ALIGNOF (or some multiple of it).  So the message word can
only be of size 4 if MAXIMUM_ALIGNOF is also 4.  IOW, I think your
approach is going to run into trouble on any system where
sizeof(Size)==4 but MAXIMUM_ALIGNOF==8.
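
Just to illustrate the padding I mean, a toy sketch (the typedef and macros
are stand-ins for the backend's, and the real code also has to handle
wrapping around the ring buffer):

#include <stddef.h>
#include <stdint.h>
#include <string.h>

typedef size_t Size;			/* stand-in for c.h's definition */

#define MAXIMUM_ALIGNOF 8
#define MAXALIGN(LEN) \
	(((uintptr_t) (LEN) + (MAXIMUM_ALIGNOF - 1)) & ~((uintptr_t) (MAXIMUM_ALIGNOF - 1)))

/*
 * Append a Size length word plus the message body, padding the length word
 * out to MAXALIGN so the body and the next length word always start on a
 * maxaligned boundary -- even when sizeof(Size) == 4 but MAXIMUM_ALIGNOF == 8.
 */
char *
append_message(char *dest, const void *msg, Size len)
{
	memset(dest, 0, MAXALIGN(sizeof(Size)));	/* zero any pad bytes */
	memcpy(dest, &len, sizeof(Size));			/* 4- or 8-byte length word */
	dest += MAXALIGN(sizeof(Size));
	memcpy(dest, msg, len);
	return dest + MAXALIGN(len);				/* keep the next slot maxaligned */
}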

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] Archive recovery won't be completed on some situation.

2014-03-16 Thread Kyotaro HORIGUCHI
Hello, very sorry to have bothered you with a silly question.

me It is in far better proportion than recovery.conf option:), since
me it is already warned to be dangerous as its nature. Anyway I'll
me make sure the situation under the trouble fist.

Checking out the original issue, it looks like exactly the 'starting up as a
standby of an ex-master which crashed during backup mode' case. I agree that
no rescue is needed for that case, since it is simply a database corruption.
The usefulness of a pg_resetxlog feature for resetting the
backup_label-related items is not clear so far, so I don't wish to see it
realised this time.

regards,

-- 
Kyotaro Horiguchi
NTT Open Source Software Center




Re: [HACKERS] Minimum supported version of Python?

2014-03-16 Thread Peter Eisentraut
On Sat, 2014-03-15 at 20:55 -0400, Tom Lane wrote:
 Our documentation claims that the minimum Python version for plpython
 is 2.3.  However, an attempt to build with that on an old Mac yielded
 a bunch of failures in the plpython_types regression test,

It has frequently been the case that the last supported version does not
fully pass the regression test, because of the overhead of maintaining
variant files.  The last supported version is the one that compiles and
works.  You will note that 2.2 no longer compiles.  (It also failed the
regression tests for a while before it started not compiling.)
Typically, versions fall out of support because we add new functionality
that the old Python versions cannot support anymore.

 all of the form
 
 ! ERROR:  could not import a module for Decimal constructor
 ! DETAIL:  ImportError: No module named decimal

You can make this work by manually installing the decimal module
(because it was not part of the core in Python 2.3).  Otherwise, this
test result legitimately alerts you that some feature is not fully
working and that you need to adjust your installation.

 A quick trawl through the buildfarm revealed no evidence that we're
 actually testing anything older than Python 2.5, which is why this
 wasn't evident awhile ago.

It was known when the decimal feature was committed.  I regularly test
all supported Python versions.

 It looks like the failing test cases date back to PG 9.0.

This feature is new in 9.4, so that can't be quite right.

We have added and dropped supported Python versions in most major
releases over time, so there will likely be different results and
different considerations in each major release.

Certainly, all Python versions listed as supported in each major PG
release actually worked at the time of release.

 Personally I have no desire to put any effort into fixing this, and
 thus suggest that we just change the documentation to specify that 2.5
 is the minimum Python version since 9.0.  Does anybody else want to
 work harder on it, and if so what would you propose doing exactly?

It is debatable whether 2.3 should still be considered supported in 9.4.
If we do drop it, then the supported version should be 2.4.  Personally,
I think everything is fine as it is.






Re: [HACKERS] Minimum supported version of Python?

2014-03-16 Thread Peter Eisentraut
On Sun, 2014-03-16 at 15:56 +1100, Brendan Jurd wrote:
 Versions
 earlier than 2.5 are probably only of interest to historians at this
 point.

and users of RHEL 5





Re: [HACKERS] [BUGS] BUG #9223: plperlu result memory leak

2014-03-16 Thread Alvaro Herrera
Alex Hunsaker escribió:
 On Wed, Mar 5, 2014 at 12:55 PM, Alex Hunsaker bada...@gmail.com wrote:
  On Wed, Mar 5, 2014 at 12:22 PM, Alvaro Herrera
  alvhe...@2ndquadrant.com wrote:
 
  Can I bug you into verifying what supported releases need this patch,
  and to which does it backpatch cleanly?  And if there's any to which it
  doesn't, can I further bug you into providing one that does?
 
  Sure! Not bugging at all. I'll dig into this in a few hours.
 
 This will apply cleanly all the way to REL9_2_STABLE. It applies (with
 fuzz, but cleanly to REL9_1). REL9_0 does this completely differently
 and so does not have this leak.

Excellent, thanks.  I verified all these assertions and then pushed.

-- 
Álvaro Herrera                http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] Portability issues in shm_mq

2014-03-16 Thread Tom Lane
Robert Haas robertmh...@gmail.com writes:
 But I think there's another possible problem here.  In order for reads
 from the buffer not to suffer alignment problems, the chunk size for
 reads and writes from the buffer needs to be MAXIMUM_ALIGNOF (or some
 multiple of it).  And in order to avoid a great deal of additional and
 unwarranted complexity, the size of the message word also needs to be
 MAXIMUM_ALIGNOF (or some multiple of it).  So the message word can
 only be of size 4 if MAXIMUM_ALIGNOF is also 4.  IOW, I think your
 approach is going to run into trouble on any system where
 sizeof(Size)==4 but MAXIMUM_ALIGNOF==8.

Well, it will result in padding space when you maxalign the length word,
but I don't see why it wouldn't work; and it would certainly be no less
efficient than what's there today.

I'll be quite happy to test the results on my old HPPA box, which has
exactly those properties, if you're worried about it.

regards, tom lane




Re: [HACKERS] Minimum supported version of Python?

2014-03-16 Thread Tom Lane
Peter Eisentraut pete...@gmx.net writes:
 On Sat, 2014-03-15 at 20:55 -0400, Tom Lane wrote:
 Our documentation claims that the minimum Python version for plpython
 is 2.3.  However, an attempt to build with that on an old Mac yielded
 a bunch of failures in the plpython_types regression test,

 Personally I have no desire to put any effort into fixing this, and
 thus suggest that we just change the documentation to specify that 2.5
 is the minimum Python version since 9.0.  Does anybody else want to
 work harder on it, and if so what would you propose doing exactly?

 It is debatable whether 2.3 should still be considered supported in 9.4.
 If we do drop it, then the supported version should be 2.4.  Personally,
 I think everything is fine as it is.

Well, if you want to consider python 2.3 as supported, I have a buildfarm
machine I am about to put online that has 2.3 on it.  If I spin it up with
python enabled, I expect you to see to it that it starts passing.  If you
won't do that, I'm going to change the documentation.

As for 2.4 vs 2.5, I don't have a lot of faith that we're really
supporting anything that's not represented in the buildfarm...

regards, tom lane




Re: [HACKERS] First-draft release notes for next week's releases

2014-03-16 Thread Josh Berkus
On 03/16/2014 12:32 PM, Greg Stark wrote:
 I would consider adding something like For the problem to occur a
 foreign key from another table must exist and a new row must be added
 to that other table around the same time (possibly in the same
 transaction) as an update to the referenced row That would help
 people judge whether their databases are vulnerable. If they don't
 have foreign keys or if they have a coding pattern that causes this to
 happen regularly then they should be able to figure that out and
 possibly disable them if they can't update promptly.

I don't think that will actually help people know whether they're
vulnerable without a longer explanation.

It's starting to sound like we need a wiki page for this release?

-- 
Josh Berkus
PostgreSQL Experts Inc.
http://pgexperts.com




Re: [HACKERS] Providing catalog view to pg_hba.conf file - Patch submission

2014-03-16 Thread Prabakaran, Vaishnavi
From: Magnus Hagander [mailto:mag...@hagander.net]
Sent: Friday, 14 March 2014 9:33 PM
To: Prabakaran, Vaishnavi
Cc: PostgreSQL-development
Subject: Re: [HACKERS] Providing catalog view to pg_hba.conf file - Patch 
submission

On Fri, Mar 14, 2014 at 6:30 AM, Prabakaran, Vaishnavi 
vaishna...@fast.au.fujitsu.com wrote:
Hi,

In connection to my previous proposal about providing catalog view to 
pg_hba.conf file contents , I have developed the attached patch .

[Current situation]
Currently, to view the pg_hba.conf file contents, the DB admin has to access the 
file on the database server to read the settings.  In the case of huge or multiple 
hba files, finding which hba rules are actually loaded is difficult and 
time-consuming.

[What this Patch does]
The attached patch provides a new view, pg_hba_settings, to admin users.  Public 
access to the view is restricted.  This view displays basic information about the 
HBA settings of the PostgreSQL cluster.  The information shown is taken from the 
parsed hba lines, not read directly from the pg_hba.conf files.  The documentation 
is also updated to describe the new view under Chapter 47, System Catalogs.  Also, 
a new note is added in Chapter 19.1, The pg_hba.conf File.

[Advantage]
The advantage of having this pg_hba_settings view is that the admin can check 
which hba rules are loaded at runtime via a database connection itself, making it 
easy to review all users and their access rules in a single view and manage them.
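
As a usage sketch (the column names here are my guesses for illustration; the 
actual ones are defined by the patch):

SELECT type, database, user_name, address, auth_method
  FROM pg_hba_settings;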


This looks like a useful feature, so make sure you register it on 
https://commitfest.postgresql.org/action/commitfest_view?id=22.

I haven't looked at the actual code yet, but I did notice one thing on a very 
quick look over the docs - it seems to be completely ignoring the key/value 
parameters given on a row, and stops reporting after the auth method? That 
seems bad. And also, probably host/mask should be using the inet style 
datatypes and not text?

Agreed; I am now working on adding a new column, configuration_option, to 
display the key/value parameters.  I will send the updated patch once the new 
column is added.
Host/mask values are stored as a sockaddr_storage structure in the parsed hba 
lines, so I have used the text datatype to display the hostname.

Thanks & Regards,
Vaishnavi
Fujitsu Australia
--
 Magnus Hagander
 Me: http://www.hagander.net/
 Work: http://www.redpill-linpro.com/


Re: [HACKERS] [RFC] What should we do for reliable WAL archiving?

2014-03-16 Thread Amit Kapila
On Sun, Mar 16, 2014 at 7:53 PM, MauMau maumau...@gmail.com wrote:
 From: Amit Kapila amit.kapil...@gmail.com

 How about using pg_receivexlog for archiving purpose?


 pg_receivexlog is good in that it does fsync().  But it seems difficult to
 use correctly, and I'm not sure if I can catch all WAL segments without any
 loss.  pg_receivexlog must be started with postmaster and monitored with
 some measures.  This won't be very easy at least on Windows.

 The pg_receivexlog reference page suggests another difficulty:

 Notes
 When using pg_receivexlog instead of archive_command, the server will
 continue to recycle transaction log files even if the backups are not
 properly archived, since there is no command that fails. This can be worked
 around by having an archive_command that fails when the file has not been
 properly archived yet, for example:
 archive_command = 'sleep 5 && test -f /mnt/server/archivedir/%f'

 This suggestion is not correct, because it only checks the existence of the
 file.  What if the file size is less than 16MB?  How can we check if the
 file is completely archived?

The most probable reasons for unsuccessful archiving could be:
1. Disk space is full - pg_receivexlog makes sure, while opening/creating a new
segment file, that the new file's size is 16MB (open_walfile()). So for this
reason the above command should not have a problem.

2. Permission denied - I think this will lead to failure of the archive
command you mentioned above.

3. Network connection broken - This will also lead to failure of the above
command, but here I think there is a possibility that it would check the
existence of an incomplete WAL file at the archive location and consider it
archived; however, I think wal_keep_segments can avoid this problem.

Also, if you are on 9.4, maybe the --slot parameter can help you.


With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com




[HACKERS] First draft of update announcement

2014-03-16 Thread Josh Berkus
... attached.  Please correct!

-- 
Josh Berkus
PostgreSQL Experts Inc.
http://pgexperts.com
The PostgreSQL Global Development Group has released a critical update to version 9.3 of the database system. This minor release, PostgreSQL 9.3.4, fixes a data corruption issue with replication and crash recovery, as well as several other minor issues.  All users of version 9.3 are urged to update their installations at the next opportunity.

The data corruption issue in PostgreSQL 9.3 affects binary replication standbys, servers being recovered from point-in-time-recovery backup, and standalone servers which recover from a system crash. The bug causes rows to vanish from indexes during recovery due to simultaneous updates of rows on both sides of a foreign key.  This can then cause query results to be inconsistent depending on whether or not an index is used, and eventually lead to primary key violations and similar issues.  For this reason, users are encouraged to take a new base backup of each of their standy databases after applying the update.

This release resolves some other issues which affect PostgreSQL 9.3, including:

* Make sure that statistics files for dropped databases get deleted
* Fix timing consistency issue with NOTIFY
* Allow materialized views to be referenced in UPDATE and DELETE queries
* Allow regular expression execution to be cancelled
* Remove ability to execute OVERLAPs with a single argument
* Improve performance of index checks for newly added rows
* Prevent premature walsender disconnection
* Add read-only data_checksum parameter
* Prevent memory errors on newer Windows versions
* Prevent erroneous operator push-down in pgsql_fdw
* Update timezone files

Additional changes and details of some of the above issues can be found in the Release Notes.

As with other minor releases, users are not required to dump and reload their database or use pg_upgrade in order to apply this update release; you may simply shut down PostgreSQL and update its binaries.  Users who have skipped multiple update releases may need to perform additional post-update steps; see the Release Notes for details.

Links:
  * [Download](http://postgresql.org/download)
  * [Release Notes](http://www.postgresql.org/docs/current/static/release.html)



Re: [HACKERS] First draft of update announcement

2014-03-16 Thread Ian Lawrence Barwick
2014-03-17 13:24 GMT+09:00 Josh Berkus j...@agliodbs.com:
 ... attached.  Please correct!

A couple of drive-by corrections:

each of their standy databases

  standy -> standby

Prevent erroneous operator push-down in pgsql_fdw

  pgsql_fdw -> postgres_fdw


Regards

Ian Barwick




[HACKERS] Fix typo in nbtree.h introduced by efada2b

2014-03-16 Thread Michael Paquier
Hi,

I found a small typo in nbtree.h, introduced by commit efada2b. Patch
is attached.
Regards,
-- 
Michael
diff --git a/src/include/access/nbtree.h b/src/include/access/nbtree.h
index 7b26f98..779a422 100644
--- a/src/include/access/nbtree.h
+++ b/src/include/access/nbtree.h
@@ -395,7 +395,7 @@ typedef struct xl_btree_unlink_page
 {
RelFileNode node;
BlockNumber deadblk;/* target block being deleted */
-	BlockNumber leftsib;	/* taregt block's left sibling, if any */
+	BlockNumber leftsib;	/* target block's left sibling, if any */
BlockNumber rightsib;   /* target block's right sibling */
 
/*



Re: [HACKERS] gaussian distribution pgbench

2014-03-16 Thread KONDO Mitsumasa

(2014/03/15 15:53), Fabien COELHO wrote:


Hello Heikki,


A couple of comments:

* There should be an explicit \setrandom ... uniform option too, even though
you get that implicitly if you don't specify the distribution


Indeed. I agree. I suggested it, but it got lost.


* What exactly does the threshold mean? The docs informally explain that the
larger the threshold, the more frequent values close to the middle of the
interval are drawn, but that's pretty vague.


There are explanations and computations as comments in the code. If it is about
the documentation, I'm not sure that a very precise mathematical definition will
help a lot of people, and might rather hinder understanding, so the doc focuses
on an intuitive explanation instead.


* Does min and max really make sense for gaussian and exponential
distributions? For gaussian, I would expect mean and standard deviation as the
parameters, not min/max/threshold.


Yes... and no :-)  The aim is to draw an integer primary key from a table, so it
must be in a specified range. This is approximated by drawing a double value
with the expected distribution (gaussian or exponential) and projecting it
carefully onto integers. If it is out of range, there is a loop and another
value is drawn. The minimal threshold constraint (2.0) ensures that the
probability of looping is low.


* How about setting the variable as a float instead of integer? Would seem more
natural to me. At least as an option.


Which variable? The values set by setrandom are mostly used for primary keys. We
really want integers in a range.

Oh, I see. He was talking about the documentation.

+   Moreover, set gaussian or exponential with threshold interger value,
+   we can get gaussian or exponential random in integer value between
+   <replaceable>min</> and <replaceable>max</> bounds inclusive.

Correctly:
+   Moreover, set gaussian or exponential with threshold double value,
+   we can get gaussian or exponential random in integer value between
+   <replaceable>min</> and <replaceable>max</> bounds inclusive.


And I am going to fix the documentation to be more easily understood by users.
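
For reference, a small pgbench script using the syntax proposed in this patch,
as I understand it (the 5.0 threshold is just an example; per the discussion it
must be at least 2.0):

\set naccounts 100000 * :scale
\setrandom aid 1 :naccounts gaussian 5.0
SELECT abalance FROM pgbench_accounts WHERE aid = :aid;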

Regards,
--
Mitsumasa KONDO
NTT Open Source Software Center

