Re: [HACKERS] Patch: show relation and tuple infos of a lock to acquire

2014-01-01 Thread Christian Kruse
Hi,

On 31/12/13 13:56, Peter Geoghegan wrote:
 I think that this is a worthwhile effort, but like Tom I don't think
 that it's universally true that these waiters are waiting on a tuple
 lock. Most obviously, the XactLockTableWait() call you've modified
 within nbtinsert.c is not.

This is why I don't set the tuple data in this case:

 XactLockTableWaitSetupErrorContextCallback(rel, NULL);

The second argument is the tuple argument. If it is set to NULL in the
error context callback, all output regarding the tuple is suppressed.

 ISTM that you should be printing just the value and the unique index
 there, and not any information about the tuple proper.

Good point, I will have a look at this.

 For better direction about where that new
 infrastructure ought to live, you might find some useful insight from
 commit 991f3e5ab3f8196d18d5b313c81a5f744f3baaea.

Thanks for the pointer!

But, to be honest, I am still unsure where to put this. As far as I
understand, this commit has substantial parts in relcache.c and
elog.c, and neither seems like a very good fit?

Regards,

-- 
 Christian Kruse   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services





Re: [HACKERS] Patch: show relation and tuple infos of a lock to acquire

2014-01-01 Thread Christian Kruse
On 31/12/13 11:36, Tom Lane wrote:
 Christian's idea of a context line seems plausible to me.  I don't
 care for this implementation too much --- a global variable?  Ick.
 Make a wrapper function for XactLockTableWait instead, please.

Point taken. You are right.

 And I'm not real sure that showing the whole tuple contents is a good
 thing; I can see people calling that a security leak, not to mention
 that the performance consequences could be dire.

What do you suggest to include? Having some information to identify
the tuple may be very helpful for debugging. This is why I did it this
way.

Regards,

-- 
 Christian Kruse   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services





Re: [HACKERS] Patch: show relation and tuple infos of a lock to acquire

2014-01-01 Thread Simon Riggs
On 31 December 2013 17:06, Simon Riggs si...@2ndquadrant.com wrote:
 On 31 December 2013 16:36, Tom Lane t...@sss.pgh.pa.us wrote:
 Simon Riggs si...@2ndquadrant.com writes:
 On 31 December 2013 09:12, Christian Kruse christ...@2ndquadrant.com 
 wrote:
 Output with patch:

 LOG:  process 24774 acquired ShareLock on transaction 696 after 11688.720 
 ms
 CONTEXT:  relation name: foo (OID 16385)
 tuple (ctid (0,1)): (1)

 That is useful info.

 I think the message should be changed to say this only, without a context 
 line

 LOG:  process 24774 acquired ShareLock on relation foo (OID 16385)
 tuple (0,1) after 11688.720 ms

 My reason is that pid 24774 was waiting for a *tuple lock* and it was
 eventually granted, so that's what it should say.

 No, that wasn't what it was waiting for, and lying to the user like that
 isn't going to make things better.

 "Like that" is ambiguous and I don't understand you or what you are
 objecting to.

 When we run SELECT ... FOR SHARE we are attempting to lock rows. Why
 is the transaction we are waiting for important when we wait to lock
 rows, but when we wait to lock relations it isn't?

My thought is that this message refers to a lock wait for a
transaction to end, which is made as part of a request to lock a row.
But it is not the whole of the lock request, and a race condition
exists that means we might need to wait multiple times in rare
circumstances.

So reporting that tuple lock has been *acquired* would be inaccurate,
since at that point it isn't true. So we need to describe the
situation better, e.g. as part of waiting for a tuple lock we waited
on a transaction. Not very snappy.

Anyway, the important thing is that we can work out the tuple being
waited for. Maybe logging the PK value would be useful as well, but
not the whole row.

-- 
 Simon Riggs   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] [bug fix] connection service file doesn't take effect with ECPG apps

2014-01-01 Thread Michael Meskes
On Sun, Dec 29, 2013 at 05:35:22AM +0900, MauMau wrote:
 Your test case seems different from my original mail.  In my test

As it turned out that wasn't the problem. It was a simple typo on my
side. Sorry for the hassle.

However, I'd prefer to solve the problem slightly differently by not creating
an empty host variable instead of checking for it after the fact. But I take it
you don't mind that.

Fixed in HEAD and all back branches. Thanks for the report.

Michael
-- 
Michael Meskes
Michael at Fam-Meskes dot De, Michael at Meskes dot (De|Com|Net|Org)
Michael at BorussiaFan dot De, Meskes at (Debian|Postgresql) dot Org
Jabber: michael.meskes at gmail dot com
VfL Borussia! Força Barça! Go SF 49ers! Use Debian GNU/Linux, PostgreSQL




[HACKERS] Fixing pg_basebackup with tablespaces found in $PGDATA

2014-01-01 Thread Dimitri Fontaine
Hi,

As much as I've seen people frown upon $subject, it still happens in the
wild, and Magnus seems to agree that the current failure mode of our
pg_basebackup tool when confronted with the situation is a bug.

So here's a fix, attached.

To reproduce, mkdir -p $PGDATA/tbs/foo then CREATE TABLESPACE there, and
then pg_basebackup your server. If doing so from the same server, as I
did, then pick the tar format, as here:

  pg_basebackup -Ft -z -c fast -v -X fetch -D /tmp/backup

Then use tar to see that the base backup contains the whole content of
your foo tablespace, and if you did create another tablespace within
$PGDATA/pg_tblspc (which is the other common way to trigger that issue)
then add it to what you want to see:

  tar tzvf /tmp/backup/base.tar.gz pg_tblspc tbs/foo pg_tblspc/bar 

Note that empty directories are expected, so tar should output their
entries. Those directories are where you need to be restoring the
tablespace tarballs.

When using pg_basebackup in plain mode, the error is that you get a copy
of all your tablespaces first, then the main PGDATA is copied over, and
as the destination directories already exist (and are not empty) the
whole backup fails there.

The bug should be fixed against all revisions of pg_basebackup, though I
didn't try to apply this very patch on all target branches.

Regards,
-- 
Dimitri Fontaine
http://2ndQuadrant.fr PostgreSQL : Expertise, Formation et Support

*** a/src/backend/replication/basebackup.c
--- b/src/backend/replication/basebackup.c
***
*** 45,51  typedef struct
  } basebackup_options;
  
  
! static int64 sendDir(char *path, int basepathlen, bool sizeonly);
  static int64 sendTablespace(char *path, bool sizeonly);
  static bool sendFile(char *readfilename, char *tarfilename,
  		 struct stat * statbuf, bool missing_ok);
--- 45,52 
  } basebackup_options;
  
  
! static int64 sendDir(char *path, int basepathlen, int rootpathlen,
! 	 bool sizeonly, List *tablespaces);
  static int64 sendTablespace(char *path, bool sizeonly);
  static bool sendFile(char *readfilename, char *tarfilename,
  		 struct stat * statbuf, bool missing_ok);
***
*** 100,105  perform_base_backup(basebackup_options *opt, DIR *tblspcdir)
--- 101,114 
  	XLogRecPtr	endptr;
  	TimeLineID	endtli;
  	char	   *labelfile;
+ 	char	cwd[MAXPGPATH];
+ 	int rootpathlen;
+ 
+ 	/* we need to compute rootpathlen to allow for skipping tablespaces
+ 	 * installed within PGDATA
+ 	 */
+ 	getcwd(cwd, MAXPGPATH);
+ 	rootpathlen = strlen(cwd) + 1;
  
  	backup_started_in_recovery = RecoveryInProgress();
  
***
*** 165,171  perform_base_backup(basebackup_options *opt, DIR *tblspcdir)
  
  		/* Add a node for the base directory at the end */
  		ti = palloc0(sizeof(tablespaceinfo));
! 		ti->size = opt->progress ? sendDir(".", 1, true) : -1;
  		tablespaces = lappend(tablespaces, ti);
  
  		/* Send tablespace header */
--- 174,181 
  
  		/* Add a node for the base directory at the end */
  		ti = palloc0(sizeof(tablespaceinfo));
! 		ti->size = opt->progress ?
! 			sendDir(".", 1, rootpathlen, true, tablespaces) : -1;
  		tablespaces = lappend(tablespaces, ti);
  
  		/* Send tablespace header */
***
*** 191,197  perform_base_backup(basebackup_options *opt, DIR *tblspcdir)
  sendFileWithContent(BACKUP_LABEL_FILE, labelfile);
  
  /* ... then the bulk of the files ... */
! sendDir(".", 1, false);
  
  /* ... and pg_control after everything else. */
  if (lstat(XLOG_CONTROL_FILE, statbuf) != 0)
--- 201,207 
  sendFileWithContent(BACKUP_LABEL_FILE, labelfile);
  
  /* ... then the bulk of the files ... */
! sendDir(".", 1, rootpathlen, false, tablespaces);
  
  /* ... and pg_control after everything else. */
  if (lstat(XLOG_CONTROL_FILE, statbuf) != 0)
***
*** 779,785  sendTablespace(char *path, bool sizeonly)
  	size = 512;	/* Size of the header just added */
  
  	/* Send all the files in the tablespace version directory */
! 	size += sendDir(pathbuf, strlen(path), sizeonly);
  
  	return size;
  }
--- 789,795 
  	size = 512;	/* Size of the header just added */
  
  	/* Send all the files in the tablespace version directory */
! 	size += sendDir(pathbuf, strlen(path), 0, sizeonly, NIL);
  
  	return size;
  }
***
*** 788,796  sendTablespace(char *path, bool sizeonly)
   * Include all files from the given directory in the output tar stream. If
   * 'sizeonly' is true, we just calculate a total length and return it, without
   * actually sending anything.
   */
  static int64
! sendDir(char *path, int basepathlen, bool sizeonly)
  {
  	DIR		   *dir;
  	struct dirent *de;
--- 798,810 
   * Include all files from the given directory in the output tar stream. If
   * 'sizeonly' is true, we just calculate a total length and return it, without
   * actually sending anything.
+  *
+  * Omit any directory listed in tablespaces, so as to 

Re: [HACKERS] Logging WAL when updating hintbit

2014-01-01 Thread Robert Haas
On Tue, Dec 24, 2013 at 7:31 PM, Michael Paquier
michael.paqu...@gmail.com wrote:
 Yep, finding all the code paths with a single keyword is usually a
 good thing. Attached is a purely-aesthetical patch that updates the
 internal variable name to wal_log_hints.

 There are many GUC parameters other than wal_log_hints whose
 names are not the same as the names of their variables. Should we
 also update them?
 IMO, this looks hard to accept as some existing extensions would break
 (even if fix would be trivial) and it would make back-patching more
 difficult. wal_log_hints is a new parameter though...

That's pretty much how I feel about it as well.  It probably wouldn't
hurt very much to go and rename things like Log_disconnections to
log_disconnections, but changing NBuffers to shared_buffers would
doubtless annoy a lot of people.  Rather than argue about it, I
suppose we might as well leave the old ones alone.

But keeping the names consistent for new parameters seems simple
enough, so I've committed your patch.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] Logging WAL when updating hintbit

2014-01-01 Thread Michael Paquier
On Thu, Jan 2, 2014 at 10:21 AM, Robert Haas robertmh...@gmail.com wrote:
Thanks.
-- 
Michael




Re: [HACKERS] Show lossy heap block info in EXPLAIN ANALYZE for bitmap heap scan

2014-01-01 Thread Robert Haas
On Fri, Dec 27, 2013 at 1:47 AM, Etsuro Fujita
fujita.ets...@lab.ntt.co.jp wrote:
 I wrote:
  Robert Haas wrote:
   I'd be wary of showing a desired value unless it's highly likely to
   be accurate.

  The desired value is accurately estimated based on (a) the total
  number of exact/lossy pages stored in the TIDBitmap and (b) the
  following equation in tbm_create(), except for the GIN case where
  lossy pages are added to the TIDBitmap by tbm_add_page().

 I've found there is another risk of overestimating the desired memory space
 for a BitmapAnded TIDBitmap.  I'm inclined to get rid of the estimation
 functionality from the patch completely, and leave it for future work.
 Attached is a new version of the patch, which shows only fetch block
 information and memory usage information.  I'll add this to the upcoming CF.

I spent some time looking at this tonight.  I don't think the value
that is displayed for the bitmap memory tracking will be accurate in
complex cases.  The bitmap heap scan may sit on top of one or more
bitmap-and or bitmap-or nodes.  When a bitmap-and operation happens,
one of the two bitmaps being combined will be thrown out and the
number of entries in the other map will, perhaps, be decreased.  The
peak memory usage for the surviving bitmap will be reflected in the
number displayed for the bitmap heap scan, but the peak memory usage
for the discarded bitmap will not.  This is wholly arbitrary because
both bitmaps existed at the same time, side by side, and which one we
keep and which one we throw out is essentially random.

I think we could report the results in a more principled way if we
reported the value for each bitmap *index* scan node rather than each
bitmap *heap* scan node.  However, I'm not sure it's really worth it.
I think what people really care about is knowing whether the bitmap
lossified or not, and generally how much got lossified.  The counts of
exact and lossy pages are sufficient for that, without anything
additional - so I'm inclined to think that the best course of action
might be to remove from the patch everything that's concerned with
trying to measure memory usage and just keep the exact/lossy page
counts.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] preserving forensic information when we freeze

2014-01-01 Thread Robert Haas
On Fri, Dec 27, 2013 at 9:24 PM, Stephen Frost sfr...@snowman.net wrote:
 I'm not sure what you mean by "doesn't work", because it clearly does
 work.  I've already posted a patch.  You may find it ugly, but that's
 not the same as not working.

 I meant *practically*, it doesn't work.  By which, I mean, it sucks as
 a solution. :)

I fail to see why.  What is so ugly about this:

select x.* from pgbench_accounts a, pg_tuple_header(a.tableoid, a.ctid) x;

It may be true that that query wouldn't have worked before Tom added
LATERAL, but he did add LATERAL and we have it now and nobody's going
to care about this function on versions that haven't got LATERAL,
because it won't exist there anyway.  The whole point of adding
features like LATERAL (which doesn't even appear in that query,
interestingly enough) is that it was really annoying to use functions
like this before we had it.  But now we *do* have it, so the fact a
function like this would have been annoying to use in older releases
isn't a reason not to add it now.

I will admit that performance may be a reason.  After pgbench -i:

rhaas=# explain analyze select xmax from pgbench_accounts;
   QUERY PLAN

 Seq Scan on pgbench_accounts  (cost=0.00..2640.00 rows=10
width=4) (actual time=0.009..14.950 rows=10 loops=1)
 Total runtime: 20.354 ms
(2 rows)

rhaas=# explain analyze select (pg_tuple_header(tableoid,ctid)).xmax
from pgbench_accounts;
QUERY PLAN
--
 Seq Scan on pgbench_accounts  (cost=0.00..2890.00 rows=10
width=10) (actual time=0.023..314.783 rows=10 loops=1)
 Total runtime: 322.714 ms
(2 rows)

  Hindsight being what it is, perhaps we should have stuffed the system
  columns into a complex type instead of having individual columns, but
  I'm not sure changing that now would be worth the backwards
  compatibility break (yes, I know they're system columns, but I've seen
  more than one case of using ctid to break ties in otherwise identical
  rows..).

 Well, if the consensus is in favor of adding more system columns,
 that's not my first choice, but I'm OK with it.  However, I wonder how
 we plan to name them.  If we add pg_infomask and pg_infomask2, it
 won't be consistent with the existing naming convention which doesn't
 include any kind of pg-prefix, but if we don't use such a prefix then
 every column we add further pollutes the namespace.

 Yeah, I agree that it gets a bit ugly...  What would you think of doing
 *both*?  Keep the existing system columns for backwards compatibility,
 but then also have a complex 'pg_header' type which provides all of the
 existing columns, as well as infomask & infomask2 ...?

I don't really see what that accomplishes, TBH.  If we further modify
the header format in the future, we'll need to modify the definition
of that type, and that will be a backward-compatibility break for
anyone using it.  Adding new system columns is probably less
intrusive.

Maybe it's best to just add pg_infomask and pg_infomask2 as system
columns and not worry about the inconsistency with the naming
convention for existing columns.

Other opinions?

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] Bogus error handling in pg_upgrade

2014-01-01 Thread Robert Haas
On Sun, Dec 29, 2013 at 12:25 AM, Tom Lane t...@sss.pgh.pa.us wrote:
 check_ok() is particularly badly named, since it contains not one iota
 of error checking.  misleadingly_claim_ok() would be a better name.

That's pretty hilarious, actually.  I think it probably started as a
copy of initdb.c's check_ok(), and then at some point along the line
it got its heart ripped out.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] Planning time in explain/explain analyze

2014-01-01 Thread Robert Haas
On Sat, Dec 28, 2013 at 11:32 PM, Andreas Karlsson andr...@proxel.se wrote:
 New version of the patch with updated documentation and which does not
 display the planning time when the COSTS are off.

 I will add it to the next commitfest.

I'm wondering whether the time should be stored inside the PlannedStmt
node instead of passing it around separately. One possible problem
with the way you've done things here is that, in the case of a
prepared statement, EXPLAIN ANALYZE will emit the time needed to call
GetCachedPlan(), even if that function didn't do any replanning.  Now
you could argue that this is the correct behavior, but I think there's
a decent argument that what we ought to show there is the amount of
time that was required to create the plan that we're displaying at the
time it was created, rather than the amount of time that was required
to figure out that we didn't need to replan.

A minor side benefit of this approach is that you wouldn't need to
change the signature for ExplainOnePlan(), which would avoid breaking
extensions that may call it.

Also, I am inclined to think we ought to update the examples, rather
than say this:

+rewriting and parsing. We will not include this line in later examples in
+this section.
+   </para>

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




[HACKERS] more psprintf() use

2014-01-01 Thread Peter Eisentraut
Here is a more or less straightforward patch to add more use of
psprintf() in place of hardcoded palloc(N) + sprintf() and the like.

From 4a476ed7a37222d87fed068972c88ed884cf25c3 Mon Sep 17 00:00:00 2001
From: Peter Eisentraut pete...@gmx.net
Date: Wed, 1 Jan 2014 22:09:21 -0500
Subject: [PATCH] Add more use of psprintf()

---
 contrib/dblink/dblink.c|  5 +--
 contrib/hstore/hstore_io.c |  6 +--
 contrib/pageinspect/btreefuncs.c   | 73 +++---
 contrib/pgstattuple/pgstatindex.c  | 34 ++--
 doc/src/sgml/xtypes.sgml   |  3 +-
 src/backend/access/transam/multixact.c |  3 +-
 src/backend/commands/define.c  |  7 +---
 src/backend/optimizer/plan/subselect.c |  8 +---
 src/backend/parser/parse_type.c|  3 +-
 src/backend/storage/smgr/md.c  |  4 +-
 src/backend/utils/adt/date.c   |  6 +--
 src/backend/utils/adt/timestamp.c  |  7 +---
 src/backend/utils/fmgr/funcapi.c   |  3 +-
 src/test/regress/regress.c |  6 +--
 src/tutorial/complex.c |  3 +-
 15 files changed, 52 insertions(+), 119 deletions(-)

diff --git a/contrib/dblink/dblink.c b/contrib/dblink/dblink.c
index a91a547..5fd1dd8 100644
--- a/contrib/dblink/dblink.c
+++ b/contrib/dblink/dblink.c
@@ -1563,10 +1563,7 @@ static bool is_valid_dblink_option(const PQconninfoOption *options,
 		Datum		result;
 
 		values = (char **) palloc(2 * sizeof(char *));
-		values[0] = (char *) palloc(12);		/* sign, 10 digits, '\0' */
-
-		sprintf(values[0], "%d", call_cntr + 1);
-
+		values[0] = psprintf("%d", call_cntr + 1);
 		values[1] = results[call_cntr];
 
 		/* build the tuple */
diff --git a/contrib/hstore/hstore_io.c b/contrib/hstore/hstore_io.c
index 772a5ca..8331a56 100644
--- a/contrib/hstore/hstore_io.c
+++ b/contrib/hstore/hstore_io.c
@@ -1114,11 +1114,7 @@
 	HEntry	   *entries = ARRPTR(in);
 
 	if (count == 0)
-	{
-		out = palloc(1);
-		*out = '\0';
-		PG_RETURN_CSTRING(out);
-	}
+		PG_RETURN_CSTRING("");
 
 	buflen = 0;
 
diff --git a/contrib/pageinspect/btreefuncs.c b/contrib/pageinspect/btreefuncs.c
index bc34af9..e3f3c28 100644
--- a/contrib/pageinspect/btreefuncs.c
+++ b/contrib/pageinspect/btreefuncs.c
@@ -220,31 +220,17 @@
 		elog(ERROR, "return type must be a row type");
 
 	j = 0;
-	values[j] = palloc(32);
-	snprintf(values[j++], 32, "%d", stat.blkno);
-	values[j] = palloc(32);
-	snprintf(values[j++], 32, "%c", stat.type);
-	values[j] = palloc(32);
-	snprintf(values[j++], 32, "%d", stat.live_items);
-	values[j] = palloc(32);
-	snprintf(values[j++], 32, "%d", stat.dead_items);
-	values[j] = palloc(32);
-	snprintf(values[j++], 32, "%d", stat.avg_item_size);
-	values[j] = palloc(32);
-	snprintf(values[j++], 32, "%d", stat.page_size);
-	values[j] = palloc(32);
-	snprintf(values[j++], 32, "%d", stat.free_size);
-	values[j] = palloc(32);
-	snprintf(values[j++], 32, "%d", stat.btpo_prev);
-	values[j] = palloc(32);
-	snprintf(values[j++], 32, "%d", stat.btpo_next);
-	values[j] = palloc(32);
-	if (stat.type == 'd')
-		snprintf(values[j++], 32, "%d", stat.btpo.xact);
-	else
-		snprintf(values[j++], 32, "%d", stat.btpo.level);
-	values[j] = palloc(32);
-	snprintf(values[j++], 32, "%d", stat.btpo_flags);
+	values[j++] = psprintf("%d", stat.blkno);
+	values[j++] = psprintf("%c", stat.type);
+	values[j++] = psprintf("%d", stat.live_items);
+	values[j++] = psprintf("%d", stat.dead_items);
+	values[j++] = psprintf("%d", stat.avg_item_size);
+	values[j++] = psprintf("%d", stat.page_size);
+	values[j++] = psprintf("%d", stat.free_size);
+	values[j++] = psprintf("%d", stat.btpo_prev);
+	values[j++] = psprintf("%d", stat.btpo_next);
+	values[j++] = psprintf("%d", (stat.type == 'd') ? stat.btpo.xact : stat.btpo.level);
+	values[j++] = psprintf("%d", stat.btpo_flags);
 
 	tuple = BuildTupleFromCStrings(TupleDescGetAttInMetadata(tupleDesc),
    values);
@@ -380,18 +366,13 @@ struct user_args
 		itup = (IndexTuple) PageGetItem(uargs->page, id);
 
 		j = 0;
-		values[j] = palloc(32);
-		snprintf(values[j++], 32, "%d", uargs->offset);
-		values[j] = palloc(32);
-		snprintf(values[j++], 32, "(%u,%u)",
- BlockIdGetBlockNumber(&(itup->t_tid.ip_blkid)),
- itup->t_tid.ip_posid);
-		values[j] = palloc(32);
-		snprintf(values[j++], 32, "%d", (int) IndexTupleSize(itup));
-		values[j] = palloc(32);
-		snprintf(values[j++], 32, "%c", IndexTupleHasNulls(itup) ? 't' : 'f');
-		values[j] = palloc(32);
-		snprintf(values[j++], 32, "%c", IndexTupleHasVarwidths(itup) ? 't' : 'f');
+		values[j++] = psprintf("%d", uargs->offset);
+		values[j++] = psprintf("(%u,%u)",
+			   BlockIdGetBlockNumber(&(itup->t_tid.ip_blkid)),
+			   itup->t_tid.ip_posid);
+		values[j++] = psprintf("%d", (int) IndexTupleSize(itup));
+		values[j++] = psprintf("%c", IndexTupleHasNulls(itup) ? 't' : 'f');
+		values[j++] = psprintf("%c", IndexTupleHasVarwidths(itup) ? 't' : 'f');
 
 		ptr = (char *) itup + IndexInfoFindDataOffset(itup->t_info);
 		dlen = IndexTupleSize(itup) - IndexInfoFindDataOffset(itup->t_info);
@@ -477,18 

Re: [HACKERS] more psprintf() use

2014-01-01 Thread Robert Haas
On Wed, Jan 1, 2014 at 10:14 PM, Peter Eisentraut pete...@gmx.net wrote:
 Here is a more or less straightforward patch to add more use of
 psprintf() in place of hardcoded palloc(N) + sprintf() and the like.

Looks nifty.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] preserving forensic information when we freeze

2014-01-01 Thread Greg Stark
 I fail to see why.  What is so ugly about this:

 select x.* from pgbench_accounts a, pg_tuple_header(a.tableoid, a.ctid) x;

Two points:

1) it's a bit weird to go to this effort to eliminate system columns by
using a scheme that depends on having a system column -- ctid

2) refetching a row could conceivably end up retrieving different data than
was present when the row was originally read. (In some cases that might
actually be the intended behaviour)

If this came up earlier I'm sorry but I suppose it's too hard to have a
function like foo(tab.*) which magically can tell that the record is a heap
tuple and look in the header? And presumably throw an error if passed a non
heap tuple.

-- 
greg


Re: [HACKERS] more psprintf() use

2014-01-01 Thread Heikki Linnakangas

On 01/02/2014 05:14 AM, Peter Eisentraut wrote:

diff --git a/contrib/hstore/hstore_io.c b/contrib/hstore/hstore_io.c
index 772a5ca..8331a56 100644
--- a/contrib/hstore/hstore_io.c
+++ b/contrib/hstore/hstore_io.c
@@ -1114,11 +1114,7 @@
HEntry *entries = ARRPTR(in);

if (count == 0)
-   {
-   out = palloc(1);
-   *out = '\0';
-   PG_RETURN_CSTRING(out);
-   }
+   PG_RETURN_CSTRING("");

buflen = 0;


Is it legal to return a constant with PG_RETURN_CSTRING? Grepping 
around, I don't see that being done anywhere else, but there are places 
that do PG_RETURN_CSTRING(pstrdup(constant))...


- Heikki

