Re: [HACKERS] new group commit behavior not helping?

2012-04-01 Thread Jeff Janes
On Saturday, March 31, 2012, Jeff Janes jeff.ja...@gmail.com wrote:
 On Saturday, March 31, 2012, Robert Haas robertmh...@gmail.com wrote:
 On Sun, Apr 1, 2012 at 1:40 AM, Jeff Janes jeff.ja...@gmail.com wrote:

 It looks like in your case tps was still scaling with clients when you
gave
 up, so clients was probably too small.

 What is kind of weird is that it actually seems to scale at almost
 exactly half of linear.

This is expected.  A very common pattern in commits/fsync is to see
alternations between 1 and C-1, or between 2 and C-2.

To cure that, play with commit_delay.  Don't make the mistake I did:
commit_delay is in microseconds, not milliseconds.  That didn't matter when
the minimum kernel sleep was 10 or 4 ms anyway.  Now, with much finer sleeps,
it makes a huge difference, so try ~5000.
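Concretely, the knobs involved live in postgresql.conf; something like the
following (values are an illustrative starting point, not a tuned
recommendation):

```
commit_delay = 5000       # in MICROseconds (~5 ms), not milliseconds
commit_siblings = 5       # only delay when this many other xacts are active
```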

Cheers

Jeff


Re: [HACKERS] measuring lwlock-related latency spikes

2012-04-01 Thread Simon Riggs
On Sun, Apr 1, 2012 at 4:05 AM, Robert Haas robertmh...@gmail.com wrote:

 If I filter for waits greater than 8s, a somewhat different picture emerges:

      2 waited at indexam.c:521 blocked by bufmgr.c:2475
    212 waited at slru.c:310 blocked by slru.c:526

 In other words, some of the waits for SLRU pages to be written are...
 really long.  There were 126 that exceeded 10 seconds and 56 that
 exceeded 12 seconds.  Painful is putting it mildly.

Interesting. The total wait contribution from those two factors
exceeds the WALInsertLock wait.

 I suppose one interesting question is to figure out if there's a way I
 can optimize the disk configuration in this machine, or the Linux I/O
 scheduler, or something, so as to reduce the amount of time it spends
 waiting for the disk.  But the other thing is why we're waiting for
 SLRU page writes to begin with.

First, we need to determine that it is the clog where this is happening.

Also, you're assuming this is an I/O issue. I think it's more likely
that this is a lock starvation issue. Shared locks continually queue-jump
over the exclusive lock, blocking access for long periods.

I would guess that is also the case with the index wait, where I would
guess a near-root block needs an exclusive lock, but is held up by
continual index tree descents.

My (fairly old) observation is that the shared lock semantics only
work well when exclusive locks are fairly common. When they are rare,
the semantics work against us.

We should either 1) implement non-queue-jump semantics for certain
cases, or 2) put a limit on the number of queue jumps that can occur
before we let the next exclusive lock proceed. (2) sounds better, but
keeping track might well cause greater overhead.
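Option (2) can be sketched as a toy admission model (plain Python, not
PostgreSQL's lock manager; all names here are illustrative):

```python
# Toy model of lock admission: shared requests ('S') normally "queue jump"
# past a waiting exclusive request ('X'); capping the number of jumps
# bounds how long the exclusive waiter can be starved.

def admit(requests, cap):
    """Grant a stream of lock requests and return the grant order."""
    granted = []
    pending_x = []          # exclusive requests waiting their turn
    jumps = 0               # shared grants since the oldest X arrived
    for r in requests:
        if r == 'X':
            pending_x.append(r)
        elif pending_x and jumps >= cap:
            # Jump budget exhausted: oldest exclusive waiter goes first.
            granted.append(pending_x.pop(0))
            jumps = 0
            granted.append(r)
        else:
            if pending_x:
                jumps += 1  # this shared request jumped the queue
            granted.append(r)
    granted.extend(pending_x)
    return granted

stream = ['S', 'X', 'S', 'S', 'S', 'S', 'S', 'S']
print(admit(stream, cap=float('inf')))  # unbounded jumping: X granted last
print(admit(stream, cap=4))             # capped: X granted after 4 jumps
```

The per-lock `jumps` counter is exactly the bookkeeping whose overhead is the
worry above.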

-- 
 Simon Riggs   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Command Triggers patch v18

2012-04-01 Thread Dimitri Fontaine
Robert Haas robertmh...@gmail.com writes:
 How about calling it "command tag"?  I think both "context" and "toplevel"
 are inconsistent with other uses of those terms: "context" is an
 error-reporting field, among other things; and we don't care about the
 toplevel command, just the command-tag of the one we're executing -
 e.g. if DROP fires a command trigger which invokes CREATE which fires
 another command trigger, the inner one is going to get CREATE not
 DROP.  Or at least so I presume.

It's not about that though, it's about a DROP TYPE that cascades to a
DROP FUNCTION, or a DROP SCHEMA that cascades to 10 DROP TABLE. I want
to know in each cascaded DROP TABLE that it's happening as a result of
DROP SCHEMA ... CASCADE, so I'm calling that a top-level command.

 See above example: I am pretty sure you need a stack.

In the next version, certainly. As of now I'm willing to start a new stack
in each command executed in a command trigger. That means 9.2 will only
expose the first level of the stack, I guess.

Also, there's a difference between CASCADE (no new command emitted) and an
event trigger that executes a new top-level command.

 I would not want my replication system issuing cascaded drops, because
 if the sides don't match it might cascade to something on the remote
 side that it doesn't cascade to on the local side, which exceeds my
 tolerance for scary behavior.

Well, it depends on what you're achieving with replication; this term
includes so many different use cases… What I want core to provide is the
mechanism that allows implementing the replication you need.

 There are far too many variants and cases of our command to be able to
 extract their parameters in a flat way (a bunch of variables compared to
 a nested description à la json or xml), and I don't think such a flat
 representation is going to be much better than the parse tree.

 I strongly disagree.  I think we'll find that with the right choice of
 hook points, the number of variables that need to be exposed is quite
 compact.  Indeed, I'd venture to say that needing to pass lots and
 lots of information is evidence that you've made a poor choice of hook
 point.

Currently we're exposing a very limited set of variables. So I think
we're good in your book.

 No, but whether or not you mention it in the CREATE TRIGGER syntax has
 nothing to do with whether it's available as a magic parameter inside
 the procedure.  Those things ought to be independent.  I imagine that
 the stuff that is accessible from inside the trigger will be richer
 than what you can do in the trigger syntax itself.

Exactly, we're in agreement here.

 I'm still not sold on the idea of lumping together every command under
 a single command_start event.  I can't see anyone wanting to hook
 anything that broad.  Don't forget that we need to document not only
 which triggers will fire but also what magic variables they'll get. A

Yeah, and command start triggers are only going to have the tag, the
toplevel tag (or whatever the right name for that is), and the parse tree
if the trigger is written in C. And that's it. Command start event
triggers are typically fired directly from utility.c.

 dcl_command_start hook could conceivably get the list of privileges
 being granted or revoked, but a general command_start trigger is going
 to need a different set of magic variables depending on the actual
 command type.  I think it might be simpler and more clear to say that
 each event type provides these variables, rather than having to
 conditionalize it based on the command type.

That's going to be true for other event timing specs, but not for the
command start as I picture it.

 See above - generally, I think that it's useful for a command trigger
 to know that it's being called because of a DDL event, rather than
 some command that could be doing anything.  Also, I think that wanting
 to hook all DDL commands is likely to be a useful thing to do, and
 without having to explicitly list 50 command names...

Yeah, just omit the WHEN clause then.

Regards,
--
Dimitri Fontaine
http://2ndQuadrant.fr PostgreSQL : Expertise, Formation et Support



Re: [HACKERS] measuring lwlock-related latency spikes

2012-04-01 Thread Robert Haas
On Sun, Apr 1, 2012 at 7:07 AM, Simon Riggs si...@2ndquadrant.com wrote:
 First, we need to determine that it is the clog where this is happening.

I can confirm that based on the LWLockIds.  There were 32 of them
beginning at lock id 81, and a gdb session confirms that
ClogCtlData->shared->buffer_locks[0..31] point to exactly that set of
LWLockIds.

 Also, you're assuming this is an I/O issue. I think it's more likely
 that this is a lock starvation issue. Shared locks continually queue-jump
 over the exclusive lock, blocking access for long periods.

That is a possible issue in general, but I can't see how it could be
happening here, because the shared lock is only a mechanism for
waiting for an I/O to complete.  The backend doing the I/O grabs the
control lock, sets a flag saying there's an I/O in progress, takes the
buffer lock in exclusive mode, and releases the control lock.  The
shared locks are taken when someone notices that the flag is set on a
buffer they want to access.  So there aren't any shared lockers until
the buffer is already locked in exclusive mode.  Or at least I don't
think there are; please correct me if I'm wrong.
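The handshake described above can be modeled as a toy in plain Python (this
is not PostgreSQL code; a `threading.Lock` has no shared mode, so the real
LWLock's shared/exclusive distinction is flattened — the point is only the
ordering: nobody waits on the buffer lock until the writer already holds it):

```python
import threading

# Toy model of the SLRU buffer-I/O handshake: the writer takes the
# per-buffer lock "exclusively" before any shared locker shows up, so
# shared acquirers exist only as a way of sleeping until the I/O finishes.

class SlruBufferModel:
    def __init__(self):
        self.control = threading.Lock()      # stands in for the SLRU control lock
        self.buffer_lock = threading.Lock()  # stands in for the per-buffer LWLock
        self.io_in_progress = False

    def start_io(self):
        # Writer: control lock held only briefly while the flag is set
        # and the buffer lock is taken "exclusively".
        with self.control:
            self.io_in_progress = True
            self.buffer_lock.acquire()

    def finish_io(self):
        with self.control:
            self.io_in_progress = False
            self.buffer_lock.release()       # wakes every waiter at once

    def wait_for_io(self):
        # Reader: sees the flag set, then takes and immediately releases
        # the buffer lock purely to wait out the writer's I/O.
        with self.control:
            busy = self.io_in_progress
        if busy:
            self.buffer_lock.acquire()       # blocks until finish_io()
            self.buffer_lock.release()
```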

Now... I do think it's possible that this could happen: backend #1
wants to write the buffer, so grabs the lock and writes the buffer.
Meanwhile some waiters pile up.  When the guy doing the I/O finishes,
he releases the lock, releasing all the waiters.  They then have to
wake up and grab the lock, but maybe before they (or some of them) can
do it somebody else starts another I/O on the buffer and they all have
to go back to sleep.  That could allow the wait time to be many times
the I/O time.  If that's the case, we could just make this use
LWLockAcquireOrWait(); the calling code is just going to pick a new
victim buffer anyway, so it's silly to go through additional spinlock
cycles to acquire a lock we don't actually want.

I bet I can add some more instrumentation to get clearer data on what
is happening here.  What I've added so far doesn't seem to be
affecting performance very much.

 I would guess that is also the case with the index wait, where I would
 guess a near-root block needs an exclusive lock, but is held up by
 continual index tree descents.

 My (fairly old) observation is that the shared lock semantics only
 work well when exclusive locks are fairly common. When they are rare,
 the semantics work against us.

 We should either 1) implement non-queue-jump semantics for certain
 cases, or 2) put a limit on the number of queue jumps that can occur
 before we let the next exclusive lock proceed. (2) sounds better, but
 keeping track might well cause greater overhead.

Maybe, but your point that we should characterize the behavior before
engineering solutions is well-taken, so let's do that.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



[HACKERS] Autovacuum worker does not set stack_base_ptr

2012-04-01 Thread Heikki Linnakangas
Currently, only regular backends set the stack base pointer, for the 
check_stack_depth() mechanism, in PostgresMain. We don't have stack 
overrun protection in auxiliary processes. However, autovacuum workers 
at least can run arbitrary user code, and if that overruns the stack, 
you get a segfault.


Here's a little script to reproduce that:

begin;
create table atable(id int4);
insert into atable select generate_series(1,1);

/* not recursive yet */
create or replace function recfunc(i int4) returns bool AS $$begin
  return true;
end;
$$ language plpgsql immutable;

/* Create index using the function. */
create index recindex on atable((recfunc(id)));

/* make the function recursive */
create or replace function recfunc(i int4) returns bool AS $$begin
  perform recfunc(i);
end;
$$ language plpgsql immutable;

commit;

/* Now wait for autoanalyze to kick in, and crash */

The fix is quite straightforward, we just need to set the stack base 
pointer. I think we should set it in all processes, even though most 
auxiliary processes like bgwriter can't execute arbitrary code. There's 
no harm in doing so, anyway. I'm thinking that we should set the base 
pointer in PostmasterMain(), so that it is inherited by all forked 
processes, and in SubPostmasterMain() for EXEC_BACKEND.
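The mechanism can be illustrated with a toy in Python (not PostgreSQL code;
`MAX_DEPTH` here plays the role of max_stack_depth, and the frame-counting
stands in for comparing stack pointer addresses):

```python
import sys

# Toy illustration of the stack-base idea: record a reference point once,
# near "process start", then have recursive code compare its current depth
# against that base and raise a clean error before the stack is exhausted.

_base_depth = None
MAX_DEPTH = 200   # illustrative limit, analogous to max_stack_depth

def _current_depth():
    depth, frame = 0, sys._getframe()
    while frame is not None:
        depth += 1
        frame = frame.f_back
    return depth

def set_stack_base():
    """Record the reference point; the patch does this in PostmasterMain
    so every forked child inherits it."""
    global _base_depth
    _base_depth = _current_depth()

def check_stack_depth():
    if _base_depth is None:
        raise RuntimeError("stack base not set")
    if _current_depth() - _base_depth > MAX_DEPTH:
        raise RecursionError("stack depth limit exceeded")

def runaway(i):
    check_stack_depth()   # a clean error here beats a segfault
    return runaway(i + 1)
```

After `set_stack_base()`, calling `runaway(0)` raises a catchable error
instead of crashing, which is the same trade the patch buys for autovacuum
workers.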


Proposed patch attached. The comment changes regarding PL/Java are in 
anticipation for a fix for the Itanium-issue mentioned here: 
http://lists.pgfoundry.org/pipermail/pljava-dev/2012/001906.html. 
Nothing has yet been done in PL/Java, but I am assuming it will start 
using the set/restore_stack_base() functions introduced in this patch. 
However, we might need to do something more complicated to fix the first 
PL/Java issue I explain in that email.


I suppose this needs to be backpatched all the way to 8.3.

--
  Heikki Linnakangas
  EnterpriseDB   http://www.enterprisedb.com
commit 9d0b44fd1b8b7e872080c032ddb0cf77c1bd2c40
Author: Heikki Linnakangas heikki.linnakan...@iki.fi
Date:   Sun Apr 1 19:20:47 2012 +0300

Do stack-depth checking in all postmaster children.

We used to only initialize the stack base pointer when starting up a regular
backend, not in other processes. In particular, autovacuum workers can run
arbitrary user code, and without stack-depth checking, infinite recursion
in e.g. an index expression will bring down the whole cluster.

diff --git a/src/backend/postmaster/postmaster.c b/src/backend/postmaster/postmaster.c
index 1dac695..1440f5f 100644
--- a/src/backend/postmaster/postmaster.c
+++ b/src/backend/postmaster/postmaster.c
@@ -971,6 +971,11 @@ PostmasterMain(int argc, char *argv[])
 	set_max_safe_fds();
 
 	/*
+	 * Set reference point for stack-depth checking
+	 */
+	set_stack_base();
+
+	/*
 	 * Initialize the list of active backends.
 	 */
 	BackendList = DLNewList();
@@ -3978,6 +3983,11 @@ SubPostmasterMain(int argc, char *argv[])
 	read_backend_variables(argv[2], port);
 
 	/*
+	 * Set reference point for stack-depth checking
+	 */
+	set_stack_base();
+
+	/*
 	 * Set up memory area for GSS information. Mirrors the code in ConnCreate
 	 * for the non-exec case.
 	 */
diff --git a/src/backend/tcop/postgres.c b/src/backend/tcop/postgres.c
index 02be363..d230118 100644
--- a/src/backend/tcop/postgres.c
+++ b/src/backend/tcop/postgres.c
@@ -115,8 +115,10 @@ int			PostAuthDelay = 0;
 static long max_stack_depth_bytes = 100 * 1024L;
 
 /*
- * Stack base pointer -- initialized by PostgresMain. This is not static
- * so that PL/Java can modify it.
+ * Stack base pointer -- initialized by PostmasterMain and inherited by
+ * subprocesses. This is not static because old versions of PL/Java modify
+ * it directly. Newer versions use set_stack_base(), but we want to stay
+ * binary-compatible for the time being.
  */
 char	   *stack_base_ptr = NULL;
 
@@ -2958,6 +2960,53 @@ ia64_get_bsp(void)
 
 
 /*
+ * set_stack_base: set up reference point for stack depth checking
+ *
+ * Returns the old reference point, if any.
+ */
+pg_stack_base_t
+set_stack_base(void)
+{
+	char		stack_base;
+	pg_stack_base_t old;
+
+#if defined(__ia64__) || defined(__ia64)
+	old.stack_base_ptr = stack_base_ptr;
+	old.register_stack_base_ptr = register_stack_base_ptr;
+#else
+	old = stack_base_ptr;
+#endif
+
+	/* Set up reference point for stack depth checking */
+	stack_base_ptr = stack_base;
+#if defined(__ia64__) || defined(__ia64)
+	register_stack_base_ptr = ia64_get_bsp();
+#endif
+
+	return old;
+}
+
+/*
+ * restore_stack_base: restore reference point for stack depth checking
+ *
+ * This can be used after set_stack_base() to restore the old value. This
+ * is currently only used in PL/Java. When PL/Java calls a backend function
+ * from different thread, the thread's stack is at a different location than
+ * the main thread's stack, so it sets the base pointer before the call, and
+ * restores it afterwards.
+ */
+void
+restore_stack_base(pg_stack_base_t base)
+{
+#if defined(__ia64__) || defined(__ia64)
+	

[HACKERS] Switching to Homebrew as recommended Mac install?

2012-04-01 Thread Jay Levitt
The Mac installation docs currently recommend the EDB one-click installer as 
the first choice. While this does install pgAdmin and some other 
refinements, it is also fairly confusing to troubleshoot:


- By default, it installs to /Library/PostgreSQL, which is also (I think) 
where the Apple-supplied Lion install is


- The uninstaller is hidden in /Library/PostgreSQL, which (since Finder 
hides /Library by default) you're likely to go to via Terminal. But the 
uninstaller is a Mac app, so even if you find it you have to know to use 
"open" to run it, because Mac apps are really directories that the Finder 
abstracts away from you.


- The EDB docs are written against 8.4.

- There are, as @lluad points out, no fewer than eight ways to install 
Postgres on a Mac (fink, macports, homebrew, Lion default, build from 
source, EDB, and two other binary installers)


- We have few Mac experts hanging out in #postgresql.

- We just had two folks within an hour, BOTH with conflicting installs of 
Postgres.


So with all respect and thanks to EDB for maintaining those installers, I'd 
like to propose that homebrew become the recommended install method on Mac, 
and I will update the Mac formula to overcome any current objections. The 
nice thing about homebrew is that (a) formulas can contain arbitrary Ruby 
and command-line options, so we can easily deal with things like detecting 
existing installs, handling shared memory, etc. if we want to, and (b) pull 
requests are accepted freely and frequently, so it can always be the 
current, security-patched version.


What do folks think of this idea?  When I mention homebrew in #postgresql, 
there's always an "ick" reaction, but I believe that's true of any package 
manager (and possibly of any Mac anything, because we're all Rails-loving, 
ORM-using, SQL-not-understanding fanbois, and I say that with love.)


The current homebrew installer is already 9.1.3, and does a make-world, so 
you get all of contrib built.


POSSIBLE OBJECTIONS/PREREQUISITES

1. homebrew installs everything under /usr/local and makes that 
user-writeable.  Sorry.  It does so because most Mac users don't know how to 
edit PATH for GUI apps (it's in a .plist in a hidden directory in your home 
dir), and /usr/local is already in PATH by default.


2. The current formula installs Postgres as the desktop user, not as the 
_postgres role account.


I'm personally of the strong opinion that user-to-user privilege escalation 
attacks are NOT an issue on desktops; all important files are already owned 
by the desktop user. The attack vector is *maybe* root escalation attacks, 
but if you want root, it's so common for installers to ask permission that 
your malware could just ask.  The real attack vector is "I'm in your 
browser", and that has nothing to do with root, permissions, or users at all.


Meanwhile, the EDB installer by default installs both app and data to a 
directory that requires root - so I assume it runs as root too - and 
nobody's complained.


However, if this is a sticking point, I'd have no problem adding a --user 
option that would default to _postgres (underscore-prefixed usernames are 
the Apple standard).


3. The current formula (TCF) spits out instructions telling you how to 
initdb, but they're easy to overlook.  I'm happy to add an option if 
necessary, and might do it anyway.


4. TCF also spits out instructions for adding Postgres to launchctl (Mac's 
version of /etc/init.d or Windows Services), rather than doing it for you, 
but again, I'd happily add the option.  (I'm checking with Homebrew folks to 
see if there's some dictum against that; it's a common pattern to put 
launchctl in the instructions, but IME the usability is poor.)


5. TCF doesn't update your shared memory settings. Again, happy to add that.

6. TCF doesn't look for existing installs. This is especially a problem on 
Lion, since Apple bundles PG 8.4, either client or server IIUC, and although 
/usr/local/bin is in your PATH, it comes *after* /usr/bin (grumble), so 
you'll either have the homebrew server fail to launch (since port 5432 is in 
use), or you'll have an outdated client version.


In IRC, both users had actually installed the EDB version months ago and 
forgotten about it, but over time, Lion users will grow, since all new Macs 
come with only Lion.  There are several ways to address this; my preference 
is to have homebrew warn about existing installs but take care of any magic 
to make them go away, à la 
http://nextmarvel.net/blog/2011/09/brew-install-postgresql-on-os-x-lion/.


7. There's no homebrew formula for pgadmin. I've never built it, and might 
be able to add that, but probably not right away.


8. There might be other popular things that EDB's StackBuilder does.

9. EDB is an important contributor to the PG core community, and maybe the 
link juice/publicity is politically important.  Lemme know.


That's all I can think of... thoughts? Objections? Which do you think are 
prerequisites?


Jay Levitt


Re: [HACKERS] Switching to Homebrew as recommended Mac install?

2012-04-01 Thread Jay Levitt

Jay Levitt wrote:

POSSIBLE OBJECTIONS/PREREQUISITES


10. There is no homebrew support for multiple versions, and no current plans 
to add it (though it's on the wishlist). This means homebrew is only useful 
if "I want to install a PostgreSQL thingie" is the common Mac use case. If 
people often need to use specific older versions, to mirror their server 
configs, it's a problem.  It *might* be possible to hack this into our 
formula, but I'm not sure it's either doable or acceptable.




Re: [HACKERS] Switching to Homebrew as recommended Mac install?

2012-04-01 Thread Tom Lane
Jay Levitt jay.lev...@gmail.com writes:
 So with all respect and thanks to EDB for maintaining those installers, I'd 
 like to propose that homebrew become the recommended install method on Mac, 
 and I will update the Mac formula to overcome any current objections.

This proposal doesn't seem to me to have any chance at all of getting
accepted.  While you might not like the EDB installer, at least those
folks are active in the lists and accountable for whatever problems
their code has.  Who in heck is responsible for the homebrew
packaging, and do they answer questions in the PG lists?

ISTM a more useful response to the problems you've mentioned is to
suggest to the EDB folk that they'd better install somewhere else
than where the built-in Lion installation is.

regards, tom lane



Re: [HACKERS] Switching to Homebrew as recommended Mac install?

2012-04-01 Thread Dave Page
Hi

On Sun, Apr 1, 2012 at 4:14 PM, Jay Levitt jay.lev...@gmail.com wrote:
 The Mac installation docs currently recommend the EDB one-click installer as
 the first choice. While this does install pgadmin and some other
 refinements, it also is fairly confusing to troubleshoot:

 - By default, it installs to /Library/PostgreSQL, which is also (I think)
 where the Apple-supplied Lion install is

No, Apple's version is installed in /usr on mine.

 - The uninstaller is hidden in /Library/PostgreSQL, which (since Finder
 hides /Library by default) you're likely to go to via Terminal. But the
 uninstaller is a Mac app, so even if you find it you have to know to use
 open to run it, because Mac apps are really directories that the Finder
 abstracts away from you.

Yes.

 - The EDB docs are written against 8.4.

Only if you install 8.4. If you install 8.3 you get the 8.3 docs, 9.0
the 9.0 docs and so on.

 - There are, as @lluad points out, no fewer than eight ways to install
 Postgres on a Mac (fink, macports, homebrew, Lion default, build from
 source, EDB, and two other binary installers)

That isn't any more of a reason to discount the EDB installer than any other.

 - We have few Mac experts hanging out in #postgresql.

Not sure how this is relevant to the proposal.

 - We just had two folks within an hour, BOTH with conflicting installs of
 Postgres.

Not sure how that is relevant either. You can have conflicting
installations using any of the installation methods, including a
home-built source tree.

 1. homebrew installs everything under /usr/local and makes that
 user-writeable.  Sorry.  It does because most Mac users don't know how to
 edit PATH for GUI apps (it's in a .plist in a hidden directory in your home
 dir), and /usr/local is already in PATH by default.

Your reasoning doesn't make sense. Why does putting something in the
path require a directory to be world-writeable?

In any case, the fact that Homebrew does that to /usr/local should be
enough to make any user run away screaming in terror. If it opens up a
security hole like that, what else does it do to break your system?

 2. The current formula installs Postgres as the desktop user, not as the
 _postgres role account.

That's not very helpful on shared machines - and whilst it may be fine
for developers etc., it's not the recommended way to set up PostgreSQL
for any kind of production use.

 I'm personally of the strong opinion that user-to-user privilege escalation
 attacks are NOT an issue on desktops; all important files are already owned
 by the desktop user. The attack vector is *maybe* root escalation attacks,
 but if you want root, it's so common for installers to ask permission that
 your malware could just ask.  The real attack vector is "I'm in your
 browser", and that has nothing to do with root, permissions, or users at
 all.

I, and I suspect many others, would disagree that user to user
security is not important.

 Meanwhile, the EDB installer by default installs both app and data to a
 directory that requires root - so I assume it runs as root too - and
 nobody's complained.

No it doesn't. It installs the app to a root owned directory for
security, and the data goes in a postgres owned directory so it can
only be modified by the account the service runs under.

 4. TCF also spits out instructions for adding Postgres to launchctl (Mac's
 version of /etc/init.d or Windows Services), rather than doing it for you,
 but again, I'd happily add the option.  (I'm checking with Homebrew folks to
 see if there's some dictum against that; it's a common pattern to put
 launchctl in the instructions, but IME the usability is poor.)

 5. TCF doesn't update your shared memory settings. Again, happy to add that.

 6. TCF doesn't look for existing installs. This is especially a problem on
 Lion, since Apple bundles PG 8.4, either client or server IIUC, and although
 /usr/local/bin is in your PATH, it comes *after* /usr/bin (grumble), so
 you'll either have the homebrew server fail to launch (since port 5432 is in
 use), or you'll have an outdated client version.

They seem like a number of reasons not to use Homebrew too (at least
as it is now).

 In IRC, both users had actually installed the EDB version months ago and
 forgotten about it, but over time, Lion users will grow, since all new Macs
 come with only Lion.  There are several ways to address this; my preference
 is to have homebrew warn about existing installs but take care of any magic
 to make them go away, a la
 http://nextmarvel.net/blog/2011/09/brew-install-postgresql-on-os-x-lion/.

So you propose to make it silently disable existing servers? I know
various people whose machines would be broken by that, including mine.
We went to great lengths to allow side by side installations of
different versions, precisely because developers (and occasionally
users) need to use multiple versions to support current and future
versions of their applications, and to experiment with new features.

Re: [HACKERS] Speed dblink using alternate libpq tuple storage

2012-04-01 Thread Tom Lane
I've been thinking some more about the early-termination cases (where
the row processor returns zero or longjmps), and I believe I have got
proposals that smooth off most of the rough edges there.

First off, returning zero is really pretty unsafe as it stands, because
it only works more-or-less-sanely if the connection is being used in
async style.  If the row processor returns zero within a regular
PQgetResult call, that will cause PQgetResult to block waiting for more
input.  Which might not be forthcoming, if we're in the last bufferload
of a query response from the server.  Even in the async case, I think
it's a bad design to have PQisBusy return true when the row processor
requested stoppage.  In that situation, there is work available for the
outer application code to do, whereas normally PQisBusy == true means
we're still waiting for the server.

I think we can fix this by introducing a new PQresultStatus, called say
PGRES_SUSPENDED, and having PQgetResult return an empty PGresult with
status PGRES_SUSPENDED after the row processor has returned zero.
Internally, there'd also be a new asyncStatus PGASYNC_SUSPENDED,
which we'd set before exiting from the getAnotherTuple call.  This would
cause PQisBusy and PQgetResult to do the right things.  In PQgetResult,
we'd switch back to PGASYNC_BUSY state after producing a PGRES_SUSPENDED
result, so that subsequent calls would resume parsing input.

With this design, a suspending row processor can be used safely in
either async or non-async mode.  It does cost an extra PGresult creation
and deletion per cycle, but that's not much more than a malloc and free.

Also, we can document that a longjmp out of the row processor leaves the
library in the same state as if the row processor had returned zero and
a PGRES_SUSPENDED result had been returned to the application; which
will be a true statement in all cases, sync or async.

I also mentioned earlier that I wasn't too thrilled with the design of
PQskipResult; in particular that it would encourage application writers
to miss server-sent error results, which would inevitably be a bad idea.
I think what we ought to do is define (and implement) it as being
exactly equivalent to PQgetResult, except that it temporarily installs
a dummy row processor so that data rows are discarded rather than
accumulated.  Then, the documented way to clean up after deciding to
abandon a suspended query will be to do PQskipResult until it returns
null, paying normal attention to any result statuses other than
PGRES_TUPLES_OK.  This is still not terribly helpful for async-mode
applications, but what they'd probably end up doing is installing their
own dummy row processors and then flushing results as part of their
normal outer loop.  The only thing we could do for them is to expose
a dummy row processor, which seems barely worthwhile given that it's
a one-line function.

I remain of the opinion that PQgetRow/PQrecvRow aren't adding much
usability-wise.

regards, tom lane



Re: [HACKERS] measuring lwlock-related latency spikes

2012-04-01 Thread Simon Riggs
On Sun, Apr 1, 2012 at 1:34 PM, Robert Haas robertmh...@gmail.com wrote:
 On Sun, Apr 1, 2012 at 7:07 AM, Simon Riggs si...@2ndquadrant.com wrote:
 First, we need to determine that it is the clog where this is happening.

 I can confirm that based on the LWLockIds.  There were 32 of them
 beginning at lock id 81, and a gdb session confirms that
 ClogCtlData->shared->buffer_locks[0..31] point to exactly that set of
 LWLockIds.

 Also, you're assuming this is an I/O issue. I think it's more likely
 that this is a lock starvation issue. Shared locks continually queue-jump
 over the exclusive lock, blocking access for long periods.

 That is a possible issue in general, but I can't see how it could be
 happening here, because the shared lock is only a mechanism for
 waiting for an I/O to complete.  The backend doing the I/O grabs the
 control lock, sets a flag saying there's an I/O in progress, takes the
 buffer lock in exclusive mode, and releases the control lock.  The
 shared locks are taken when someone notices that the flag is set on a
 buffer they want to access.  So there aren't any shared lockers until
 the buffer is already locked in exclusive mode.  Or at least I don't
 think there are; please correct me if I'm wrong.

Agreed.

Before the exclusive lock holder releases the lock it must acquire the
control lock in exclusive mode (line 544).

So lock starvation on the control lock would cause a long wait after
each I/O, making it look like an I/O problem.
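
To make that concrete, here is a toy Python model (purely illustrative - not
PostgreSQL code; the arrival pattern and hold times are invented) of shared
lockers repeatedly jumping the queue ahead of a waiting exclusive locker:

```python
def exclusive_wait(shared_arrivals, hold=1.0):
    """How long an exclusive waiter (queued at t=0, while the lock is
    already share-held) waits, if every newly arriving shared locker is
    granted the lock immediately, jumping ahead of the exclusive waiter.

    shared_arrivals: arrival times of later shared lockers
    hold: how long each shared locker holds the lock
    """
    busy_until = hold  # the initial shared holder releases at t=hold
    for t in sorted(shared_arrivals):
        if t <= busy_until:
            # Lock still share-held: the newcomer piggybacks on the
            # shared hold, extending the exclusive waiter's wait.
            busy_until = max(busy_until, t + hold)
        else:
            break  # a gap appeared; the exclusive waiter finally gets in
    return busy_until

# One overlapping shared locker every 0.5 time units, each holding for 1.0:
# the exclusive waiter is starved for the whole stream, not just ~1 unit.
print(exclusive_wait([0.5 * i for i in range(1, 101)]))  # 51.0
```

Without queue jumping the waiter would get the lock after roughly one hold
time; with jumping, its wait grows with the length of the shared stream,
which is exactly the shape of a starvation stall that looks like slow I/O.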

Anyway, just to note that it might not be I/O and we need to find out.

-- 
 Simon Riggs   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training  Services



Re: [HACKERS] measuring lwlock-related latency spikes

2012-04-01 Thread Greg Stark
On Sun, Apr 1, 2012 at 10:27 PM, Simon Riggs si...@2ndquadrant.com wrote:
 So lock starvation on the control lock would cause a long wait after
 each I/O, making it look like an I/O problem.

Except that both of the locks involved in his smoking gun occur
*after* the control lock has already been acquired. The one that's
actually being blocked for a long time is in fact acquiring a shared
lock which the queue jumping couldn't be hurting.

We know you're convinced about the queue jumping being a problem, and
it's definitely a plausible problem, but I think you need exactly the
kind of instrumentation Robert is doing here to test that theory.
Without it even if everyone agreed it was a real problem we would have
no idea whether a proposed change fixed it.


Fwiw this instrumentation is *amazing*. As a user this kind of rare
random stall is precisely the kind of thing that totally kills me. I
would so much rather run a web site on a database where each query
took twice as long but it guaranteed that no query would take over a
second than one that was twice as fast on average but occasionally
gets stuck for 12s.


-- 
greg



[HACKERS] log chunking broken with large queries under load

2012-04-01 Thread Andrew Dunstan
Some of my PostgreSQL Experts colleagues have been complaining to me 
that servers under load with very large queries produce CSV log files 
that are corrupted, because lines are apparently multiplexed. The log 
chunking protocol between the elog routines and the syslogger is 
supposed to prevent that, so I did a little work to try to reproduce it 
in a controlled way. On my dual quad xeon setup, this script:


   #!/bin/sh
   par=$1
   seq=$2

   sed 2000q /usr/share/dict/words > words

   psql -q -c 'drop table if exists foo'
   psql -q -c 'create table foo (t text)'

   echo '\set words `cat words`' > wordsin.sql
   echo 'prepare fooplan (text) as insert into foo values ($1);' >> wordsin.sql

   for i in `seq 1 $seq`; do
      echo "execute fooplan(:'words');" >> wordsin.sql
   done

   for i in `seq 1 $par`; do
      psql -q -t -f wordsin.sql &
   done
   wait

called with parameters of 100 and 50 (i.e. 100 simultaneous clients each 
doing 50 very large inserts) is enough to cause CSV log corruption quite 
reliably on PostgreSQL 9.1.


This is a serious bug. I'm going to investigate, but it's causing major 
pain, so anyone else who has any ideas is welcome to chime in.


cheers

andrew



Re: [HACKERS] measuring lwlock-related latency spikes

2012-04-01 Thread Greg Stark
On Sun, Apr 1, 2012 at 4:05 AM, Robert Haas robertmh...@gmail.com wrote:
 My guess based on previous testing is
 that what's happening here is (1) we examine a tuple on an old page
 and decide we must look up its XID, (2) the relevant CLOG page isn't
 in cache so we decide to read it, but (3) the page we decide to evict
 happens to be dirty, so we have to write it first.

Reading the code one possibility is that in the time we write the
oldest slru page another process has come along and redirtied it. So
we pick a new oldest slru page and write that. By the time we've
written it another process could have redirtied it again. On a loaded
system where the writes are taking 100ms or more it's conceivable --
barely -- that could happen over and over again hundreds of times.
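
That retry-on-redirty loop can be sketched with a toy model (again purely
illustrative, not the slru.c logic; the 100ms write time is the figure
mentioned upthread):

```python
def eviction_stall(redirtied_during_write, write_ms=100):
    """Toy model of evicting one dirty SLRU page: each write takes
    write_ms milliseconds, and redirtied_during_write[i] says whether
    some other backend redirtied the page while write i was in flight.
    Returns (number_of_writes, total_stall_ms) until a write completes
    with the page still clean."""
    writes = 1
    for again in redirtied_during_write:
        if not again:
            break
        writes += 1  # page came back dirty: write it again
    return writes, writes * write_ms

# If the page is redirtied during five consecutive writes, a backend
# that just wanted to evict one buffer stalls for ~600 ms:
print(eviction_stall([True] * 5 + [False]))  # (6, 600)
```

On a loaded system each extra redirty adds another full write to the stall,
so even a modest redirty streak turns one page read into a multi-second wait.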

In general the locking and reasoning about concurrent attempts to read
pages here makes my head swim. It looks like even if there's a lot of
contention for the same page or same slot it shouldn't manifest itself
that way but it seems like the kind of logic with multiple locks and
retries that is prone to priority inversion type problems. I wonder if
more detailed instrumentation showing the sequence of operations taken
while holding a lock that somebody got stuck on would help.

-- 
greg



Re: [HACKERS] Speed dblink using alternate libpq tuple storage

2012-04-01 Thread Marko Kreen
On Sun, Apr 01, 2012 at 05:51:19PM -0400, Tom Lane wrote:
 I've been thinking some more about the early-termination cases (where
 the row processor returns zero or longjmps), and I believe I have got
 proposals that smooth off most of the rough edges there.
 
 First off, returning zero is really pretty unsafe as it stands, because
 it only works more-or-less-sanely if the connection is being used in
 async style.  If the row processor returns zero within a regular
 PQgetResult call, that will cause PQgetResult to block waiting for more
 input.  Which might not be forthcoming, if we're in the last bufferload
 of a query response from the server.  Even in the async case, I think
 it's a bad design to have PQisBusy return true when the row processor
 requested stoppage.  In that situation, there is work available for the
 outer application code to do, whereas normally PQisBusy == true means
 we're still waiting for the server.
 
 I think we can fix this by introducing a new PQresultStatus, called say
 PGRES_SUSPENDED, and having PQgetResult return an empty PGresult with
 status PGRES_SUSPENDED after the row processor has returned zero.
 Internally, there'd also be a new asyncStatus PGASYNC_SUSPENDED,
 which we'd set before exiting from the getAnotherTuple call.  This would
 cause PQisBusy and PQgetResult to do the right things.  In PQgetResult,
 we'd switch back to PGASYNC_BUSY state after producing a PGRES_SUSPENDED
 result, so that subsequent calls would resume parsing input.
 
 With this design, a suspending row processor can be used safely in
 either async or non-async mode.  It does cost an extra PGresult creation
 and deletion per cycle, but that's not much more than a malloc and free.
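
The proposed control flow can be mocked up in a few lines of Python (a toy
model only - PGRES_SUSPENDED and PGASYNC_SUSPENDED are the proposed names
from this thread, not existing libpq symbols):

```python
BUSY, SUSPENDED, IDLE = "PGASYNC_BUSY", "PGASYNC_SUSPENDED", "PGASYNC_IDLE"

class ToyConn:
    """Mock connection holding a fully buffered result of `rows`; the
    row processor asks to suspend (returns 0) after every `batch` rows."""
    def __init__(self, rows, batch):
        self.rows, self.pos, self.batch = list(rows), 0, batch
        self.state, self.processed = BUSY, []

    def row_processor(self, row):
        self.processed.append(row)
        return 0 if len(self.processed) % self.batch == 0 else 1

def pq_get_result(c):
    if c.state == IDLE:
        return None  # no more results for this query
    while c.pos < len(c.rows):
        row = c.rows[c.pos]
        c.pos += 1
        if c.row_processor(row) == 0:  # processor requested stoppage
            c.state = SUSPENDED        # set inside "getAnotherTuple"
            break
    if c.state == SUSPENDED:
        c.state = BUSY                 # resume parsing on the next call
        return "PGRES_SUSPENDED"       # empty result with the new status
    c.state = IDLE
    return "PGRES_TUPLES_OK"

conn = ToyConn(range(10), batch=4)
results = []
while (r := pq_get_result(conn)) is not None:
    results.append(r)
print(results)  # ['PGRES_SUSPENDED', 'PGRES_SUSPENDED', 'PGRES_TUPLES_OK']
```

The key property of the proposal holds in the model: a suspension yields a
distinct result status instead of blocking, and the next call resumes
parsing exactly where the row processor left off.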

I added extra magic to PQisBusy(); you are adding extra magic to
PQgetResult().  Not much difference.

Seems we both lost sight of the actual usage scenario for the early-exit
logic - that both the callback and the upper-level code *must* cooperate
for it to be useful.  Instead, we designed the API for the non-cooperating
case, which is wrong.

So the proper approach would be to have new API call, designed to
handle it, and allow early-exit only from there.

That would also avoid any breakage of old APIs.  It would also avoid
any accidental data loss, if the user code does not have exactly the
right sequence of calls.

How about PQisBusy2(), which returns '2' when early-exit is requested?
Please suggest something better...


-- 
marko




Re: [HACKERS] Speed dblink using alternate libpq tuple storage

2012-04-01 Thread Tom Lane
Marko Kreen mark...@gmail.com writes:
 Seems we both lost sight of the actual usage scenario for the early-exit
 logic - that both the callback and the upper-level code *must* cooperate
 for it to be useful.  Instead, we designed the API for the non-cooperating
 case, which is wrong.

Exactly.  So you need an extra result state, or something isomorphic.

 So the proper approach would be to have new API call, designed to
 handle it, and allow early-exit only from there.

 That would also avoid any breakage of old APIs.  It would also avoid
 any accidental data loss, if the user code does not have exactly the
 right sequence of calls.

 How about PQisBusy2(), which returns '2' when early-exit is requested?
 Please suggest something better...

My proposal is way better than that.  You apparently aren't absorbing my
point, which is that making this behavior unusable with every existing
API (whether intentionally or by oversight) isn't an improvement.
The row processor needs to be able to do this *without* assuming a
particular usage style, and most particularly it should not force people
to use async mode.

An alternative that I'd prefer to that one is to get rid of the
suspension return mode altogether.  However, that leaves us needing
to document what it means to longjmp out of a row processor without
having any comparable API concept, so I don't really find it better.

regards, tom lane



[HACKERS] Event scheduling

2012-04-01 Thread Joe Van Dyk
Anyone else want event scheduling / cron / temporal triggers in
postgresql? http://dev.mysql.com/doc/refman/5.1/en/events-overview.html
shows how it works in mysql.

Can we throw money at someone to get this in postgres? Is there work
already being done on this?

Being able to regularly execute a postgres function every so often
would be really nice. It would simplify lots of deployments.

Thanks,
Joe



Re: [HACKERS] Switching to Homebrew as recommended Mac install?

2012-04-01 Thread Jay Levitt

Dave Page wrote:
 It seems to me that most of your arguments against the installers are
 based on incorrect understanding or information, and most of your
 arguments for Homebrew actually come across as arguments against!

You're right about the former - and as to the latter, they *were* arguments 
against (potential objections).  I try to pre-argue against my own 
proposals to save everyone time; if I can still prevail, I must have a damn 
good idea :)


At this point I agree with you, but I'm still going to go into detail, 
because I think there are two markets for Postgres, and the database 
community has been so focused around enterprise for so long that you're 
missing opportunities with web startups. I'd love to help bridge the gap, 
having jumped straight from big-iron PL/I to ooh-Ruby-is-shiny. And web 
startups develop on Mac laptops. They just do. So if it helps you to imagine 
me as a 20something Mac hipster, working on some hot Facebook/mobile 
app with funding from Spark Capital, do that. Lord knows it helps me.


 - We have few Mac experts hanging out in #postgresql.
 Not sure how this is relevant to the proposal.

The impetus for the idea was that there seems to be a steady stream of 
novice PG users on Mac who come into #postgresql with installation problems, 
which is bad enough as an out-of-box experience - but worse is that there 
are rarely folks around who can help. (Of course, I'm extrapolating; every 
time *I'm* in IRC and see this, there's someone who can help. But you know 
what I mean.)


And (although my proposal started with documentation) I'm of the firm 
opinion that there's no such thing as a "documentation error"; a user 
problem is a software problem.  Humans will click buttons before they'll 
read, developers are humans, and no amount of RTFM will ever fix that. If we 
can make installers smarter, that's way better than troubleshooting guides, 
IRC, mailing lists, etc. So that's where I was coming from.


I didn't realize that you were actively maintaining the EDB installer (see 
below for the 8.4 doc explanation); obviously, if you can improve that, it's 
the best solution and we should, if anything, recommend it MORE vigorously. 
Still, there's a growing community of developers who expect brew install 
to work, and I do want to fix it for them.  The EDB installer will always be 
a one-off experience; most of the other servers you install will be through 
a package manager, and homebrew's popularity (despite its youth) is 
impressive.  Both of my n=2 data points had run across PG a while back, 
installed it with the one-click to try it out, forgotten about it, done 
brew install postgresql today, and naturally ran into problems.


 - By default, it installs to /Library/PostgreSQL, which is also (I think)
 where the Apple-supplied Lion install is
 No, Apple's version is installed in /usr on mine.

Ah hah. I suppose only the Apple .plist is stored under /Library, then. Let 
me amend that to "this made everyone in IRC, and probably many other 
non-Mac-expert troubleshooters, assume that this is an Apple-installed 
package". It'd be great for this to go somewhere that feels like "Oh, this 
was installed by you"; /Library feels kinda weird for a server, though I can 
understand your reasoning. Maybe even /Library/EnterpriseDB/PostgreSQL to 
make it obvious?


 - The uninstaller is hidden in /Library/PostgreSQL, which (since Finder
 hides /Library by default) you're likely to go to via Terminal. But the
 uninstaller is a Mac app, so even if you find it you have to know to use
 open to run it, because Mac apps are really directories that the Finder
 abstracts away from you.
 Yes.

How about a one-liner bash script uninstall-postgresql that does nothing 
but open uninstall-postgresql.app?


 - The EDB docs are written against 8.4.
 Only if you install 8.4. If you install 8.3 you get the 8.3 docs, 9.0
 the 9.0 docs and so on.

No, I meant on the web:

http://www.enterprisedb.com/resources-community/pginst-guide

That's what made me assume that the installer wasn't maintained (except as 
to repackaging new PG versions, obviously). It's obviously not hard to 
replace "8.3" with "9.1" when you read it, but it still leaves an impression 
akin to "This web site works best with IE7 and above". Allow me to now 
replace most of this thread with "hey, you might wanna update that page".


 - There are eight ways to install Postgres on a Mac
 That isn't any more of a reason to discount the EDB installer than any other.

Nope, just an argument that the recommended installer should handle that nicely.

 - We just had two folks within an hour, BOTH with conflicting installs of
 Postgres.
 Not sure how that is relevant either. You can have conflicting
 installation using any of the installation methods, including a
 home-built source tree.

Right, but I suspect this is a common problem - not only have I seen it in 
IRC but 3 or 4 times in my 12-person startup, which is kinda amazing given 
that we've 

Re: [HACKERS] Switching to Homebrew as recommended Mac install?

2012-04-01 Thread Jay Levitt

Tom Lane wrote:

While you might not like the EDB installer, at least those
folks are active in the lists and accountable for whatever problems
their code has.  Who in heck is responsible for the homebrew
packaging, and do they answer questions in the PG lists?


Just for general knowledge... Who's responsible is whoever wants to be; 
homebrew is open source, and with a github-based workflow, it's trivial for 
them to accept pull requests.  Across the 1967 formulas (packages) in the repo, 
there have been 1759 contributors.  I was volunteering to be the maintainer 
and liaison if we did this; I'm pretty good at email and IRC.


It's actually pretty clever and elegant - homebrew itself uses git and 
github for formula updates and distribution, and is written in ruby1.8 which 
ships with all Macs. /usr/local is a git repo, brew update is mostly git 
pull, and brew search checks for new pull requests if it doesn't find a 
matching formula. The whole thing's all of 1500 lines of code, and you saw 
what formulas look like.


Jay



Re: [HACKERS] Switching to Homebrew as recommended Mac install?

2012-04-01 Thread David Johnston
On Apr 1, 2012, at 21:50, Jay Levitt jay.lev...@gmail.com wrote:

 Tom Lane wrote:
 While you might not like the EDB installer, at least those
 folks are active in the lists and accountable for whatever problems
 their code has.  Who in heck is responsible for the homebrew
 packaging, and do they answer questions in the PG lists?

 
 Just for general knowledge... Who's responsible is whoever wants to be; 
 homebrew is open source, and with a github-based workflow, it's trivial for 
 them to accept pull requests.  On the 1967 formulas (packages) in the repo, 
 there have been 1759 contributors.  I was volunteering to be the maintainer 
 and liaison if we did this; I'm pretty good at email and IRC.
 
 It's actually pretty clever and elegant - homebrew itself uses git and github 
 for formula updates and distribution, and is written in ruby1.8 which ships 
 with all Macs. /usr/local is a git repo, brew update is mostly git pull, 
 and brew search checks for new pull requests if it doesn't find a matching 
 formula. The whole thing's all of 1500 lines of code, and you saw what 
 formulas look like.
 
 Jay
 
 

You seem highly approving of Homebrew and seem willing to develop and support 
it.  I guess the question to be asked is: what requirements would you expect to 
have to meet before the Mac Downloads section would list your installer routine 
along with the three already present?  Aside from that, unless you are really 
intent on proving yours to be the best, if you are trying to overcome the 
shortcomings of the existing installers it would still be nice to let their 
maintainers know how you feel things could be improved for the community/user 
sub-set you belong to.

As a Windows developer (though production is on Linux) I get where you are 
coming from with respect to user permissions and the like - what is desirable 
in development and in production does differ, and so having different 
installation routines for them makes some sense.  Until, that is, your 
developers go to install on the production server and do not realize that 
they should be doing something different in order to make the server more 
secure than their development environment.

From what I follow I think you have really good ideas and sound reasoning.  
You do not need permission to contribute to the community in the way you seek 
so what is it that you are really asking for?  From the sound of things your 
primary focus is not in supporting the PostgreSQL community via providing 
services to others or developing new tools.  When brew is replaced by 
something more popular, do you think you will continue to maintain the recipe, 
or is it going to end up stuck showing us how to install version 9.3 or 
earlier?  I'm beyond my element here, but the current installer maintainers are 
doing so in addition to their other, more regular, contributions.  That said, 
the contribution, even if it did stall in the future, would still be welcomed 
and if it is found to be very useful someone would likely pickup the torch as 
long as it is released under the same terms as PostgreSQL itself.

Just trying to bridge an apparent gap, since the original e-mail seems to have 
come across as so adversarial that the underlying thoughts have been 
overlooked.  Trying to contribute in my own way with my current resources.

David J.


Re: [HACKERS] Switching to Homebrew as recommended Mac install?

2012-04-01 Thread David Johnston
On Apr 1, 2012, at 13:14, Jay Levitt jay.lev...@gmail.com wrote:

 The Mac installation docs currently recommend the EDB one-click installer as 
 the first choice. While this does install pgadmin and some other refinements, 
 it also is fairly confusing to troubleshoot:
 

The items are not numbered, and it is impossible to avoid some spatial 
ordering.  There are three options - and yes, EDB is listed at the top of the 
page - but nothing else implies any kind of order, and given that they are 
different ways to package the software, if someone prefers MacPorts or Fink 
the fact that those are listed lower shouldn't induce them to pick the 
unfamiliar one first.

David J.