[HACKERS] Inconsistency in postgresql.conf

2011-10-01 Thread Bruce Momjian
I notice that we use '(none)' as a default for external_pid_file, while
other default no-value settings in the file are ''.  The attached patch
changes this.

-- 
  Bruce Momjian  br...@momjian.us  http://momjian.us
  EnterpriseDB http://enterprisedb.com

  + It's impossible for everything to be true. +
commit b214e05827399ad320674b7d1d10979ada7f3e30
Author: Bruce Momjian br...@momjian.us
Date:   Sat Oct 1 12:00:33 2011 -0400

change 'none' default.

diff --git a/src/backend/utils/misc/postgresql.conf.sample 
b/src/backend/utils/misc/postgresql.conf.sample
new file mode 100644
index a18f14a..b2f8239
*** a/src/backend/utils/misc/postgresql.conf.sample
--- b/src/backend/utils/misc/postgresql.conf.sample
***
*** 46,52 
# (change requires restart)
  
  # If external_pid_file is not explicitly set, no extra PID file is written.
! #external_pid_file = '(none)' # write an extra PID file
# (change requires restart)
  
  
--- 46,52 
# (change requires restart)
  
  # If external_pid_file is not explicitly set, no extra PID file is written.
! #external_pid_file = ''   # write an extra PID file
# (change requires restart)
  
  

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] pg_upgrade - add config directory setting

2011-10-01 Thread Bruce Momjian
Bruce Momjian wrote:
 Tom Lane wrote:
  Bruce Momjian br...@momjian.us writes:
   Tom Lane wrote:
   Yeah.  I think the only sensible way to do this would be to provide an
   operating mode for the postgres executable that would just parse the
   config file and spit out requested values.
  
   That would certainly solve the problem, though it would have to be
   backpatched all the way back to 8.4, and it would require pg_upgrade
   users to be on newer minor versions of Postgres.
  
  I would just say no to people who expect this to work against older
  versions of Postgres.  I think it's sufficient if we get this into HEAD
  so that it will work in the future.
 
 Well, it is going to work in the future only when the _old_ version is
 9.2+.  Specifically, pg_upgrade using the flag could be patched to just
 9.2, but the flag has to be supported on old and new backends for that
 to work.

OK, I started working on #3, which was to start the servers to find the
data_directory setting, and developed the attached patch which mostly
does this.  However, I have found serious problems with pg_ctl -w/wait
mode and config-only directories (which pg_upgrade uses), and will start
a new thread to address this issue and then continue with this once that
is resolved.

-- 
  Bruce Momjian  br...@momjian.us  http://momjian.us
  EnterpriseDB http://enterprisedb.com

  + It's impossible for everything to be true. +
diff --git a/contrib/pg_upgrade/option.c b/contrib/pg_upgrade/option.c
new file mode 100644
index bdb7ddb..e9c5f25
*** a/contrib/pg_upgrade/option.c
--- b/contrib/pg_upgrade/option.c
*** parseCommandLine(int argc, char *argv[])
*** 112,121 
--- 112,123 
  
  			case 'd':
  old_cluster.pgdata = pg_strdup(optarg);
+ old_cluster.pgconfig = pg_strdup(optarg);
  break;
  
  			case 'D':
  new_cluster.pgdata = pg_strdup(optarg);
+ new_cluster.pgconfig = pg_strdup(optarg);
  break;
  
  			case 'g':
*** check_required_directory(char **dirpath,
*** 319,321 
--- 321,379 
  #endif
  		(*dirpath)[strlen(*dirpath) - 1] = 0;
  }
+ 
+ /*
+  * adjust_data_dir
+  *
+  * If a configuration-only directory was specified, find the real data
+  * directory by querying the running server.  Checking is limited here:
+  * we cannot test for an already-running server because we cannot find
+  * postmaster.pid in a config-only directory.
+  */
+ void
+ adjust_data_dir(ClusterInfo *cluster)
+ {
+ 	char		filename[MAXPGPATH];
+ 	FILE	   *fd;
+ 	PGconn	   *conn;
+ 	PGresult   *res;
+ 
+ 	/* If there is no postgresql.conf, it can't be a config-only dir */
+ 	snprintf(filename, sizeof(filename), "%s/postgresql.conf", cluster->pgconfig);
+ 	if ((fd = fopen(filename, "r")) == NULL)
+ 		return;
+ 	fclose(fd);
+ 
+ 	/* If PG_VERSION exists, it can't be a config-only dir */
+ 	snprintf(filename, sizeof(filename), "%s/PG_VERSION", cluster->pgconfig);
+ 	if ((fd = fopen(filename, "r")) != NULL)
+ 	{
+ 		fclose(fd);
+ 		return;
+ 	}
+ 
+ 	/* Must be a configuration directory, so query the server. */
+ 
+ 	prep_status("Finding the real data directory for the %s cluster",
+ 				CLUSTER_NAME(cluster));
+ 
+ 	start_postmaster(cluster);
+ 
+ 	conn = connectToServer(cluster, "template1");
+ 
+ 	res = executeQueryOrDie(conn,
+ 							"SELECT setting "
+ 							"FROM pg_settings "
+ 							"WHERE name = 'data_directory'");
+ 	assert(PQntuples(res) == 1);
+ 
+ 	pg_free(cluster->pgdata);
+ 	cluster->pgdata = pg_strdup(PQgetvalue(res, 0, 0));
+ 
+ 	PQclear(res);
+ 
+ 	PQfinish(conn);
+ 
+ 	stop_postmaster(false);
+ 
+ 	check_ok();
+ }
diff --git a/contrib/pg_upgrade/pg_upgrade.c b/contrib/pg_upgrade/pg_upgrade.c
new file mode 100644
index 0568aca..273561e
*** a/contrib/pg_upgrade/pg_upgrade.c
--- b/contrib/pg_upgrade/pg_upgrade.c
*** main(int argc, char **argv)
*** 68,73 
--- 68,76 
  
  	parseCommandLine(argc, argv);
  
+ 	adjust_data_dir(old_cluster);
+ 	adjust_data_dir(new_cluster);
+ 
  	output_check_banner(live_check);
  
  	setup(argv[0], live_check);
diff --git a/contrib/pg_upgrade/pg_upgrade.h b/contrib/pg_upgrade/pg_upgrade.h
new file mode 100644
index 46aed74..0fb16ed
*** a/contrib/pg_upgrade/pg_upgrade.h
--- b/contrib/pg_upgrade/pg_upgrade.h
*** typedef struct
*** 187,192 
--- 187,193 
  	ControlData controldata;	/* pg_control information */
  	DbInfoArr	dbarr;			/* dbinfos array */
  	char	   *pgdata;			/* pathname for cluster's $PGDATA directory */
+ 	char	   *pgconfig;		/* pathname for cluster's config file directory */
  	char	   *bindir;			/* pathname for cluster's executable directory */
  	unsigned short port;		/* port number where postmaster is waiting */
  	uint32		major_version;	/* PG_VERSION of cluster */
*** void print_maps(FileNameMap *maps, int n
*** 361,366 
--- 362,368 
  /* option.c */
  
  void		parseCommandLine(int argc, char *argv[]);
+ void		adjust_data_dir(ClusterInfo *cluster);
  
  /* relfilenode.c */
  
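The detection heuristic in adjust_data_dir() above (postgresql.conf present, PG_VERSION absent) can be restated outside the patch; this is an illustrative Python sketch of the same two checks, not part of the proposed code:

```python
import os

def is_config_only_dir(path):
    """Mirror adjust_data_dir()'s heuristic: a config-only directory
    contains postgresql.conf but no PG_VERSION file."""
    has_conf = os.path.isfile(os.path.join(path, "postgresql.conf"))
    has_version = os.path.isfile(os.path.join(path, "PG_VERSION"))
    # No postgresql.conf: cannot be a config directory at all.
    # PG_VERSION present: it is a real data directory.
    return has_conf and not has_version
```

Only when this test passes does the patch go to the expense of starting the postmaster to ask it for data_directory.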

Re: [HACKERS] [REVIEW] Generate column names for subquery expressions

2011-10-01 Thread Tom Lane
Marti Raudsepp ma...@juffo.org writes:
 On Wed, Sep 14, 2011 at 05:26, Kyotaro HORIGUCHI
 horiguchi.kyot...@oss.ntt.co.jp wrote:
 This is a review for the patch `Generate column names for
 subquery expressions'
 (https://commitfest.postgresql.org/action/patch_view?id=632)

 Thanks for the review. :)

Applied with minor adjustments.

regards, tom lane



[HACKERS] Bug with pg_ctl -w/wait and config-only directories

2011-10-01 Thread Bruce Momjian
In researching pg_ctl -w/wait mode for pg_upgrade, I found that pg_ctl
-w's handling of configuration-only directories is often incorrect.  For
example, 'pg_ctl -w stop' checks for the postmaster.pid file to
determine when the server is shut down, but there is no postmaster.pid
file in the config directory, so it fails, i.e. does nothing.  What is
interesting is that specifying the real data directory does work.  

Similarly, pg_ctl references these data directory files:

snprintf(postopts_file, MAXPGPATH, "%s/postmaster.opts", pg_data);
snprintf(backup_file, MAXPGPATH, "%s/backup_label", pg_data);
snprintf(recovery_file, MAXPGPATH, "%s/recovery.conf", pg_data);
snprintf(promote_file, MAXPGPATH, "%s/promote", pg_data);

I assume things that use these files also don't work for config-only
directories.  

You might think that you can always just specify the real data
directory, but that doesn't work if the server has to be started because
you need to point to postgresql.conf.  pg_ctl -w restart is a classic
case of something that needs both the config directory and the real data
directory.  Basically, this stuff all seems broken and needs to be fixed
or documented.

What is even worse is that pre-9.1, pg_ctl start would read ports from
the pg_ctl -o command line, but in 9.1 we changed this to force reading
the postmaster.pid file to find the port number and socket directory
location --- meaning, new in PG 9.1, 'pg_ctl -w start' doesn't work for
config-only directories either.  And, we can't easily connect to the
server to get the 'data_directory' because we need to read
postmaster.pid to get the connection settings.  :-(

I think this points to the need for a command-line tool to output the
data directory location;  I am not sure what to do about the new 9.1
breakage.

pg_upgrade can work around these issues by starting the server with the
config directory and stopping it with the real data directory, but it
cannot work around the 9.1 'pg_ctl -w start' problem for config-only
directories.
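Until such a command-line tool exists, the only server-free way to find the real data directory is to parse postgresql.conf by hand. A deliberately naive illustrative sketch follows (Python, not a proposed implementation): it handles only the simple `data_directory = 'path'` form and ignores include directives and the real GUC parser's quoting rules, which is exactly why a server-side option would be more reliable.

```python
import re

def guess_data_directory(conf_text):
    """Rough scan of postgresql.conf text for a data_directory setting.
    Only the simple  data_directory = 'path'  form is recognized; the
    last uncommented occurrence wins, as in the real GUC machinery."""
    data_dir = None
    for line in conf_text.splitlines():
        line = line.split("#", 1)[0].strip()   # drop trailing comments
        m = re.match(r"data_directory\s*=?\s*'([^']*)'", line)
        if m:
            data_dir = m.group(1)
    return data_dir
```

A commented-out `#data_directory` line is correctly ignored, but an `include` pulling the setting from another file would defeat this sketch entirely.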

-- 
  Bruce Momjian  br...@momjian.us  http://momjian.us
  EnterpriseDB http://enterprisedb.com

  + It's impossible for everything to be true. +



Re: [HACKERS] Single pass vacuum - take 2

2011-10-01 Thread Jim Nasby
On Sep 23, 2011, at 11:37 AM, Robert Haas wrote:
 Another thing I'm not sure whether to worry about is the question of
 where we store the vacuum generation information.  I mean, if we store
 it in pg_class, then what happens if the user does a manual update of
 pg_class just as we're updating the vacuum generation information?  We
 had better make sure that there are no cases where we can accidentally
 think that it's OK to reclaim dead line pointers that really still
 have references, or we're going to end up with some awfully
 difficult-to-find bugs...  never mind the fact the possibility of the
 user manually updating the value and hosing themselves.  Of course, we
 already have some of those issues - relfrozenxid probably has the same
 problems - and I'm not 100% sure whether this one is any worse.  It
 would be really nice to have those non-transactional tables that
 Alvaro keeps mumbling about, though, or some other way to store this
 information.

Whenever I'm doing data modeling that involves both user-modified data and 
system-modified data, I always try to separate the two. That way you know that 
everything in the user-modifiable table can be changed at any time, and you can 
also lock down the system-data table to prevent the possibility of any 
user-driven changes.

So, non-transactional tables or not, I think it would be a pretty good idea to 
build some separation into the catalog tables where there is a risk of 
conflict between user activities and system activities. Actually, assuming that 
all catalog tables keep using the internal access methods, it might be wise to 
go as far as separating data that is maintained by distinct system activities, 
to avoid conflicts between different parts of the system.
--
Jim C. Nasby, Database Architect   j...@nasby.net
512.569.9461 (cell) http://jim.nasby.net





Re: [HACKERS] pg_cancel_backend by non-superuser

2011-10-01 Thread Daniel Farina
On Fri, Sep 30, 2011 at 9:30 PM, Tom Lane t...@sss.pgh.pa.us wrote:
 ISTM it would be reasonably non-controversial to allow users to issue
 pg_cancel_backend against other sessions logged in as the same userID.
 The question is whether to go further than that, and if so how much.

In *every* case -- and there are many -- where we've had people
express pain, this would have sufficed.  Usually the problem is a
large index creation gone awry, or an automated backup process
blocking a schema change that has taken half the locks it needs, or
something like that -- all by the same role that is under control of
the folks feeling distress.  If this minimal set is uncontroversial, I
would like to see that much committed and then spend some time
hand-wringing on whether to extend it.

If one does want to extend it, I think role inheritance makes the most
sense: a child role should be able to cancel its parent role's
queries, and not vice-versa. Since one can use SET ROLE in this case
anyway to basically act on behalf on that role, I think that, too,
should be uncontroversial.
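The proposed rule (cancel your own sessions, optionally extended so a child role may cancel its parent's queries but not vice versa) amounts to a reachability test in the role-membership graph. An illustrative Python sketch of that permission check, with a toy role table (the role names are hypothetical):

```python
def ancestors(role, parents):
    """All roles reachable by following role -> parent membership edges."""
    seen = set()
    stack = [role]
    while stack:
        for p in parents.get(stack.pop(), ()):
            if p not in seen:
                seen.add(p)
                stack.append(p)
    return seen

def may_cancel(canceller, target, parents, superusers=frozenset()):
    """Proposed policy: superusers may cancel anything; otherwise you may
    cancel your own sessions, or those of any role you inherit from
    (a child may cancel its parent, never the reverse)."""
    if canceller in superusers:
        return True
    return target == canceller or target in ancestors(canceller, parents)
```

Note the asymmetry: membership edges are followed only upward, so granting someone a child role never exposes the child's sessions to the parent.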

-- 
fdr



[HACKERS] pg_dump issues

2011-10-01 Thread Andrew Dunstan
While investigating a client problem I just observed that pg_dump takes 
a surprisingly large amount of time to dump a schema with a large number 
of views. The client's hardware is quite spiffy, and yet pg_dump is 
taking many minutes to dump a schema with some 35,000 views. Here's a 
simple test case:


   create schema views;
   do 'begin for i in 1 .. 1 loop execute $$create view views.v_$$
   || i ||$$ as select current_date as d, current_timestamp as ts,
   $_$a$_$::text || n as t, n from generate_series(1,5) as n$$; end
   loop; end;';


On my modest hardware this database took 4m18.864s for pg_dump to run. 
Should we be looking at replacing the retail operations which consume 
most of this time with something that runs faster?
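Back-of-envelope arithmetic suggests per-object round trips alone can account for times like this: pg_dump issues at least one pg_get_viewdef() query per view, so with ~35,000 views even a few milliseconds of per-query overhead adds up to minutes. A rough illustrative model (the per-round-trip figure is an assumption, not a measurement):

```python
def dump_time_seconds(n_views, ms_per_roundtrip):
    """Crude model: schema-dump time dominated by one server round
    trip per view (latency + parse/plan + pg_get_viewdef execution)."""
    return n_views * ms_per_roundtrip / 1000.0

# 35,000 views at an assumed ~7 ms per round trip works out to about
# four minutes -- the same order of magnitude as the 4m18s above.
```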


There is also this gem of behaviour, which is where I started:

   p1                      p2
   begin;
   drop view foo;
                           pg_dump
   commit;
                           boom.

with this error:

   2011-10-01 16:38:20 EDT [27084] 30063 ERROR:  could not open
   relation with OID 133640
   2011-10-01 16:38:20 EDT [27084] 30064 STATEMENT:  SELECT
   pg_catalog.pg_get_viewdef('133640'::pg_catalog.oid) AS viewdef

Of course, this isn't caused by having a large catalog, but it's 
terrible nevertheless. I'm not sure what to do about it.


cheers

andrew




Re: [HACKERS] Bug with pg_ctl -w/wait and config-only directories

2011-10-01 Thread Mr. Aaron W. Swenson
On Sat, Oct 01, 2011 at 02:08:33PM -0400, Bruce Momjian wrote:
 In researching pg_ctl -w/wait mode for pg_upgrade, I found that pg_ctl
 -w's handling of configuration-only directories is often incorrect.  For
 example, 'pg_ctl -w stop' checks for the postmaster.pid file to
 determine when the server is shut down, but there is no postmaster.pid
 file in the config directory, so it fails, i.e. does nothing.  What is
 interesting is that specifying the real data directory does work.  
 
 Similarly, pg_ctl references these data directory files:
 
 snprintf(postopts_file, MAXPGPATH, "%s/postmaster.opts", pg_data);
 snprintf(backup_file, MAXPGPATH, "%s/backup_label", pg_data);
 snprintf(recovery_file, MAXPGPATH, "%s/recovery.conf", pg_data);
 snprintf(promote_file, MAXPGPATH, "%s/promote", pg_data);
 
 I assume things that use these files also don't work for config-only
 directories.  
 
 You might think that you can always just specify the real data
 directory, but that doesn't work if the server has to be started because
 you need to point to postgresql.conf.  pg_ctl -w restart is a classic
 case of something that needs both the config directory and the real data
 directory.  Basically, this stuff all seems broken and needs to be fixed
 or documented.
 
 What is even worse is that pre-9.1, pg_ctl start would read ports from
 the pg_ctl -o command line, but in 9.1 we changed this to force reading
 the postmaster.pid file to find the port number and socket directory
 location --- meaning, new in PG 9.1, 'pg_ctl -w start' doesn't work for
 config-only directories either.  And, we can't easily connect to the
 server to get the 'data_directory' because we need to read
 postmaster.pid to get the connection settings.  :-(
 
 I think this points to the need for a command-line tool to output the
 data directory location;  I am not sure what to do about the new 9.1
 breakage.
 
 pg_upgrade can work around these issues by starting the server with the
 config directory and stopping it with the real data directory, but it
 cannot work around the 9.1 'pg_ctl -w start' problem for config-only
 directories.
 
 -- 
   Bruce Momjian  br...@momjian.us  http://momjian.us
   EnterpriseDB http://enterprisedb.com
 
   + It's impossible for everything to be true. +

I went through several iterations trying to find a command that can work
the way we'd like it to. (Essentially, it works the way you're describing
it should.) So, in Gentoo, for the initscript, we have this really ugly
command to start the server:

su -l postgres \
    -c "env PGPORT=\"${PGPORT}\" ${PG_EXTRA_ENV} \
        /usr/lib/postgresql-9.0/bin/pg_ctl \
        start ${WAIT_FOR_START} -t ${START_TIMEOUT} -s -D ${DATA_DIR} \
        -o '-D ${PGDATA} --data-directory=${DATA_DIR} \
        --silent-mode=true ${PGOPTS}'"

And to stop the server:

su -l postgres \
    -c "env PGPORT=\"${PGPORT}\" ${PG_EXTRA_ENV} \
        /usr/lib/postgresql-9.0/bin/pg_ctl \
        stop ${WAIT_FOR_STOP} -t ${NICE_TIMEOUT} -s -D ${DATA_DIR} \
        -m smart"

The default values for these are:

PGPORT='5432'
PG_EXTRA_ENV=''
WAIT_FOR_START='-w'
START_TIMEOUT='60'
WAIT_FOR_STOP='-w'
NICE_TIMEOUT='60'
DATA_DIR='/var/lib/postgresql/9.0/data'
PGDATA='/etc/postgresql-9.0'
PGOPTS=''

We don't use 'pg_ctl restart'; instead, we stop and then start the
server. So I don't have an answer for that. I'd imagine passing
'-D ${DATA_DIR}' would do the trick there as well.

Of course, simplifying this a bit would be welcome. 

-- 
Mr. Aaron W. Swenson
Gentoo Linux Developer
Email: titanof...@gentoo.org
GnuPG FP : 2C00 7719 4F85 FB07 A49C  0E31 5713 AA03 D1BB FDA0
GnuPG ID : D1BBFDA0




Re: [HACKERS] pg_dump issues

2011-10-01 Thread Joe Abbate
On 10/01/2011 05:08 PM, Andrew Dunstan wrote:
 While investigating a client problem I just observed that pg_dump takes
 a surprisingly large amount of time to dump a schema with a large number
 of views. The client's hardware is quite spiffy, and yet pg_dump is
 taking many minutes to dump a schema with some 35,000 views. Here's a
 simple test case:
 
create schema views;
do 'begin for i in 1 .. 1 loop execute $$create view views.v_$$
|| i ||$$ as select current_date as d, current_timestamp as ts,
$_$a$_$::text || n as t, n from generate_series(1,5) as n$$; end
loop; end;';
 
 
 On my modest hardware this database took 4m18.864s for pg_dump to run.
 Should we be looking at replacing the retail operations which consume
 most of this time with something that runs faster?

How modest?  Was there anything else in the database?  I tried with 9000
views (because I didn't want to bother increasing
max_locks_per_transaction) and the pg_dump ran in less than 10 seconds
(8.991s), redirecting (plain-text) output to a file (this is on a Core i5).

 There is also this gem of behaviour, which is where I started:
 
 p1                      p2
 begin;
 drop view foo;
                         pg_dump
 commit;
                         boom.
 
 with this error:
 
2011-10-01 16:38:20 EDT [27084] 30063 ERROR:  could not open
relation with OID 133640
2011-10-01 16:38:20 EDT [27084] 30064 STATEMENT:  SELECT
pg_catalog.pg_get_viewdef('133640'::pg_catalog.oid) AS viewdef
 
 Of course, this isn't caused by having a large catalog, but it's
 terrible nevertheless. I'm not sure what to do about it.

Couldn't you run pg_dump with --lock-wait-timeout?

Joe



Re: [HACKERS] Bug with pg_ctl -w/wait and config-only directories

2011-10-01 Thread Bruce Momjian
Mr. Aaron W. Swenson wrote:
 I went through several iterations trying to find a command that can work
 the way we'd like it to. (Essentially, it works the way you're describing
 it should.) So, in Gentoo, for the initscript, we have this really ugly
 command to start the server:
 
 su -l postgres \
     -c "env PGPORT=\"${PGPORT}\" ${PG_EXTRA_ENV} \
         /usr/lib/postgresql-9.0/bin/pg_ctl \
         start ${WAIT_FOR_START} -t ${START_TIMEOUT} -s -D ${DATA_DIR} \
         -o '-D ${PGDATA} --data-directory=${DATA_DIR} \
         --silent-mode=true ${PGOPTS}'"
 
 And to stop the server:
 
 su -l postgres \
     -c "env PGPORT=\"${PGPORT}\" ${PG_EXTRA_ENV} \
         /usr/lib/postgresql-9.0/bin/pg_ctl \
         stop ${WAIT_FOR_STOP} -t ${NICE_TIMEOUT} -s -D ${DATA_DIR} \
         -m smart"
 
 The default values for these are:
 
 PGPORT='5432'
 PG_EXTRA_ENV=''
 WAIT_FOR_START='-w'
 START_TIMEOUT='60'
 WAIT_FOR_STOP='-w'
 NICE_TIMEOUT='60'
 DATA_DIR='/var/lib/postgresql/9.0/data'
 PGDATA='/etc/postgresql-9.0'
 PGOPTS=''
 
 We don't use 'pg_ctl restart'; instead, we stop and then start the
 server. So I don't have an answer for that. I'd imagine passing
 '-D ${DATA_DIR}' would do the trick there as well.
 
 Of course, simplifying this a bit would be welcome. 

What exactly is your question?  You are not using a config-only
directory but the real data directory, so it should work fine.

-- 
  Bruce Momjian  br...@momjian.us  http://momjian.us
  EnterpriseDB http://enterprisedb.com

  + It's impossible for everything to be true. +



Re: [HACKERS] pg_cancel_backend by non-superuser

2011-10-01 Thread Daniel Farina
On Sat, Oct 1, 2011 at 3:47 PM, Kääriäinen Anssi
anssi.kaariai...@thl.fi wrote:
 It would be a step in the right direction if the DB owner could see all queries
 to the DB in pg_stat_activity.

All, including that of the superuser? I'd like to pass on that one, please.

In general, I feel there is this problem that one cannot hand over a
non-superuser but powerful role to someone else, and allowing them to
make new roles with strictly less power than what they were granted
(the opposite of role inheritance, whereby children have as much or
more power).  Right now I get the feeling that I'd rather fix that
problem in the role system than by overloading what it means to be a
database owner.  If anything, to me being a database owner means the
ability to run ALTER DATABASE, and not much else.

-- 
fdr



Re: [HACKERS] pg_cancel_backend by non-superuser

2011-10-01 Thread Kääriäinen Anssi

In *every* case -- and there are many -- where we've had people
express pain, this would have sufficed.  Usually the problem is a
large index creation gone awry, or an automated backup process
blocking a schema change that has taken half the locks it needs, or
something like that -- all by the same role that is under control of
the folks feeling distress.  If this minimal set is uncontroversial, I
would like to see that much committed and then spend some time
hand-wringing on whether to extend it.

If one does want to extend it, I think role inheritance makes the most
sense: a child role should be able to cancel its parent role's
queries, and not vice-versa. Since one can use SET ROLE in this case
anyway to basically act on behalf on that role, I think that, too,
should be uncontroversial.


It would be a step in the right direction if the DB owner could see all queries
to the DB in pg_stat_activity.

 - Anssi


Re: [HACKERS] contrib/sepgsql regression tests are a no-go

2011-10-01 Thread Joshua Brindle

Robert Haas wrote:

On Tue, Sep 27, 2011 at 6:30 PM, Tom Lane t...@sss.pgh.pa.us wrote:

snip


If I have to break up the recipe with annotations like run this part as
root and then these commands no longer need root, I don't think
that's going to be an improvement over either of the above.


Fair enough, I'm not going to get bent out of shape about it.  There's
some aesthetic value in the way you're proposing, and anyone who is
doing this ought to know enough to make the details of how you write
it out mostly irrelevant.



Long term a better option may be to use mocking to test policy 
enforcement without modifying the system policy.


I've used test-dept http://code.google.com/p/test-dept/ on a couple 
projects and while it is a huge pain to get up and running it is very 
nice for mocking outside code (in this case libselinux calls) and 
getting predictable output to test your functionality. It would also let 
you run the tests on a non-SELinux system.


There are other C mocking frameworks; this is just the one I have 
experience with. test-dept might not be suitable for Postgres because it 
uses arch-specific awk scripts to munge symbol tables, and only supports 
x86, x86_64 and sparc right now.
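For comparison, the mock-the-platform-call idea is easy to show in Python terms with the standard library's unittest.mock: the "outside" call is replaced by a stub that returns predictable output, so the logic under test runs on any machine. The function names below are hypothetical stand-ins for libselinux-style calls, not real sepgsql code:

```python
from unittest import mock

def get_security_label():
    """Stand-in for an outside call (think libselinux's getcon())
    that would normally depend on the running system's policy."""
    raise RuntimeError("would require a real SELinux system")

def decide_access(label_getter=get_security_label):
    """Code under test: branches on the label the platform reports."""
    label = label_getter()
    return "allow" if label.endswith(":sepgsql_regtest_user_t:s0") else "deny"

# In a test, the platform call is mocked so the branch logic is
# exercised without touching (or even having) a system policy:
stub = mock.Mock(return_value="unconfined_u:unconfined_r:sepgsql_regtest_user_t:s0")
```

The same shape is what test-dept provides for C, by rewriting symbol tables so a test-supplied function is linked in place of the real one.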





Re: [HACKERS] pg_dump issues

2011-10-01 Thread Andrew Dunstan



On 10/01/2011 05:48 PM, Joe Abbate wrote:

On 10/01/2011 05:08 PM, Andrew Dunstan wrote:

While investigating a client problem I just observed that pg_dump takes
a surprisingly large amount of time to dump a schema with a large number
of views. The client's hardware is quite spiffy, and yet pg_dump is
taking many minutes to dump a schema with some 35,000 views. Here's a
simple test case:

create schema views;
do 'begin for i in 1 .. 1 loop execute $$create view views.v_$$
|| i ||$$ as select current_date as d, current_timestamp as ts,
$_$a$_$::text || n as t, n from generate_series(1,5) as n$$; end
loop; end;';


On my modest hardware this database took 4m18.864s for pg_dump to run.
Should we be looking at replacing the retail operations which consume
most of this time with something that runs faster?

How modest?  Was there anything else in the database?  I tried with 9000
views (because I didn't want to bother increasing
max_locks_per_transaction) and the pg_dump ran in less than 10 seconds
(8.991s), redirecting (plain-text) output to a file (this is on a Core i5).


Yeah, it must be pretty modest :-) On more powerful h/w I get the same. 
I need to dig further into why it's taking so long to dump my client's 
schema on server class hardware.



There is also this gem of behaviour, which is where I started:

p1                      p2
begin;
drop view foo;
                        pg_dump
commit;
                        boom.

with this error:

2011-10-01 16:38:20 EDT [27084] 30063 ERROR:  could not open
relation with OID 133640
2011-10-01 16:38:20 EDT [27084] 30064 STATEMENT:  SELECT
pg_catalog.pg_get_viewdef('133640'::pg_catalog.oid) AS viewdef

Of course, this isn't caused by having a large catalog, but it's
terrible nevertheless. I'm not sure what to do about it.

Couldn't you run pg_dump with --lock-wait-timeout?



How would that help? This isn't a lock failure.


cheers

andrew



Re: [HACKERS] pg_cancel_backend by non-superuser

2011-10-01 Thread Euler Taveira de Oliveira

On 01-10-2011 17:44, Daniel Farina wrote:

On Fri, Sep 30, 2011 at 9:30 PM, Tom Lane t...@sss.pgh.pa.us wrote:

ISTM it would be reasonably non-controversial to allow users to issue
pg_cancel_backend against other sessions logged in as the same userID.
The question is whether to go further than that, and if so how much.


In *every* case -- and there are many -- where we've had people
express pain, this would have sufficed.

I see. What about passing this decision to the DBA? I mean, a GUC 
can_cancel_session = 'user, dbowner' (default is '' -- superuser only). You 
could select one or both options. This GUC could only be changed by a superuser.



--
   Euler Taveira de Oliveira - Timbira   http://www.timbira.com.br/
   PostgreSQL: Consultoria, Desenvolvimento, Suporte 24x7 e Treinamento
