Re: [HACKERS] psql \set vs \copy - bug or expected behaviour?
On 17/08/12 18:38, Tom Lane wrote:
> Bruce Momjian <br...@momjian.us> writes:
>> On Fri, Oct 21, 2011 at 05:31:41PM -0400, Robert Haas wrote:
>>> I'm not sure whether that's a bug per se, but I can see where a
>>> behavior change might be an improvement.
>> I did some research on this and learned a little more about flex
>> rules. Turns out we can allow variable substitution in psql
>> whole-line commands, like \copy and \!, by sharing the variable
>> expansion flex rules with the code that does argument processing.
> Well, it'd be nice to allow substitution there ...
>> What we can't easily do is to allow quotes to prevent variable
>> substitution in these whole-line commands, because we can't process
>> the quotes because that will remove them.
> ... but if there is then no way to prevent it, that's absolutely
> unacceptable.

If I'm understanding this correctly, \copy parsing just passes the query part unaltered as part of a COPY statement back into the top-level parser. Likewise with the \! shell stuff (but presumably to execve). To handle variable substitution correctly for \copy we'd need to duplicate the full parsing for COPY. For \! we'd need something that understood shell syntax (for the various shells out there). Ick.

Or you'd need a separate variable-bracketing {{:x}} syntax that could work like reverse dollar-quoting. Also ick.

As far as we know this has only inconvenienced one person (me) badly enough to report a maybe-bug. Thanks for trying, Bruce, but I fear this is one itch that'll go unscratched. Rest assured I'm not about to storm off and replace all my installations with MySQL :-)

-- 
Richard Huxton
Archonet Ltd

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] Escaping : in .pgpass - code or docs bug?
On 19/12/11 16:48, Robert Haas wrote:
> On Sat, Dec 17, 2011 at 3:27 AM, Ross Reedstrom <reeds...@rice.edu> wrote:
>> This should either be fixed by changing the documentation to say to
>> not escape colons or backslashes in the password part, only, or
>> modify this function (PasswordFromFile) to silently unescape the
>> password string. It already copies it.
> My vote is for a doc correction in the back-branches and a behavior
> change in master.

Seems sensible - presumably mentioning this will be corrected in 9.2? It's clearly not what you'd call urgent, since nobody else seems to have noticed before now.

-- 
Richard Huxton
Archonet Ltd
[HACKERS] Escaping : in .pgpass - code or docs bug?
According to the docs [1], you should escape embedded colons in .pgpass (fair enough). Below is PG 9.1.1 with user = te:st, db = te:st, password = te:st

$ cat ~/.pgpass
*:*:te:st:te:st:te:st
$ psql91 -U te:st -d te:st
te:st=>

$ cat ~/.pgpass
*:*:te\:st:te\:st:te:st
$ psql91 -U te:st -d te:st
te:st=>

$ cat ~/.pgpass
*:*:te\:st:te\:st:te\:st
$ psql91 -U te:st -d te:st
psql: FATAL:  password authentication failed for user "te:st"
password retrieved from file "/home/richardh/.pgpass"

I'm a bit puzzled how it manages without the escaping in the first case. There's a lack of consistency though that either needs documenting or fixing.

[1] http://www.postgresql.org/docs/9.1/static/libpq-pgpass.html

-- 
Richard Huxton
Archonet Ltd
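For anyone equally puzzled: a Python paraphrase of how libpq's PasswordFromFile/pwdfMatchesString appear to behave (a simplified sketch, not the real C code). The first four fields are compared with escape handling, but an unescaped ':' only terminates a field once the wanted value has been fully consumed - so an unescaped colon inside a field can still match a literal colon. Whatever remains after the username field is copied verbatim as the password, backslashes and all, which is why escaping the password field breaks authentication.

```python
def field_matches(line, i, want):
    """Escape-aware compare of one .pgpass field against `want`, modelled
    on libpq's pwdfMatchesString(): an unescaped ':' terminates the field
    only once all of `want` has been consumed, so an unescaped colon
    inside a field can still match a literal colon in `want`.
    Returns the index just past the field's ':' terminator, or None."""
    if line[i:i + 2] == '*:':          # a '*' field matches anything
        return i + 2
    j = 0
    while i < len(line):
        c = line[i]
        escaped = False
        if c == '\\' and i + 1 < len(line):
            i += 1
            c = line[i]
            escaped = True
        if c == ':' and not escaped and j == len(want):
            return i + 1               # field terminator: step over the ':'
        if j >= len(want) or want[j] != c:
            return None
        i += 1
        j += 1
    return None

def password_from_line(line, host, port, db, user):
    """Match the first four fields; whatever is left is the password,
    copied verbatim - colons, backslashes and all."""
    i = 0
    for want in (host, port, db, user):
        i = field_matches(line, i, want)
        if i is None:
            return None
    return line[i:]

# Case 1: unescaped colons everywhere - the colon doubles as a literal match.
print(password_from_line(r"*:*:te:st:te:st:te:st", "h", "5432", "te:st", "te:st"))    # te:st
# Case 3: an escaped password keeps its literal backslash, so auth fails.
print(password_from_line(r"*:*:te\:st:te\:st:te\:st", "h", "5432", "te:st", "te:st"))  # te\:st
```

So both escaped and unescaped colons happen to work in the user/db fields, while the password field must not be escaped - exactly the inconsistency observed above.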
Re: [HACKERS] Time bug with small years
On 24/11/11 04:45, Rod Taylor wrote:
> I have no idea what is going on with the minutes/seconds, particularly
> for years under 1895 where it gets appended onto the timezone
> component?
>
> sk_test=# select version();
> PostgreSQL 9.1.1 on x86_64-pc-linux-gnu, compiled by gcc-4.6.real
> (Ubuntu/Linaro 4.6.1-9ubuntu3) 4.6.1, 64-bit
> (1 row)
>
> -- uname -a output:
> Linux rbt-dell 3.0.0-13-generic #22-Ubuntu SMP Wed Nov 2 13:27:26 UTC
> 2011 x86_64 x86_64 x86_64 GNU/Linux
>
> sk_test=# select '1894-01-01'::timestamp with time zone;
>  timestamptz
> ------------------------------
>  1894-01-01 00:00:00-05:17:32

Floating-point timestamps? Although I thought integer was the default for 9.x - hmm, INSTALL says it has been since 8.4.

-- 
Richard Huxton
Archonet Ltd
[HACKERS] psql \set vs \copy - bug or expected behaviour?
It looks like \copy is just passing the text of the query unadjusted to COPY. I get a syntax error on :x with the \copy below, on both 9.0 and 9.1.

=== test script ===
\set x '''HELLO'''
-- Works
\echo :x
-- Works
\o '/tmp/test1.txt'
COPY (SELECT :x) TO STDOUT;
-- Doesn't work
\copy (SELECT :x) TO '/tmp/test2.txt'
=== end script ===

-- 
Richard Huxton
Archonet Ltd
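The asymmetry above is that psql expands :var while lexing ordinary SQL (skipping quoted text), whereas whole-line commands like \copy get their argument passed through verbatim. A toy Python sketch of that quote-aware substitution - purely illustrative, not psql's actual flex lexer:

```python
import re

def substitute(line, variables):
    """Replace :name outside single/double quotes; leave quoted text
    alone. A toy model of psql's variable expansion."""
    out, i, quote = [], 0, None
    while i < len(line):
        c = line[i]
        if quote:                       # inside a quoted literal: copy as-is
            out.append(c)
            if c == quote:
                quote = None
            i += 1
        elif c in ("'", '"'):
            quote = c
            out.append(c)
            i += 1
        elif c == ':':
            m = re.match(r':(\w+)', line[i:])
            if m and m.group(1) in variables:
                out.append(variables[m.group(1)])
                i += m.end()
            else:                       # unknown variable: leave untouched
                out.append(c)
                i += 1
        else:
            out.append(c)
            i += 1
    return ''.join(out)

vars = {'x': "'HELLO'"}
print(substitute("SELECT :x", vars))    # the COPY case: expanded
print(substitute("SELECT ':x'", vars))  # quoted: left alone
# \copy never calls anything like this, so :x reaches COPY unexpanded
# and produces a syntax error.
```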
Re: [HACKERS] A different approach to extension NO USER DATA feature
On 06/02/11 18:23, Tom Lane wrote:
> After a bit of thought I believe that we can fix this if we are
> willing to teach pg_dump explicitly about extension configuration
> tables. The behavior we want for those is for the table schema
> definition to never be dumped (the table should always be created by
> CREATE EXTENSION), but for some subset of the table data to get
> dumped, excluding any system-provided rows.
[snip]
> pg_extension_partial_dump (table_name regclass, where_condition text)

Possible alternative approach?

1. Extension provides a list of config tables/views/set-returning functions to be dumped, via e.g. my_config_tables()
2. They get dumped, but each as a TEMP TABLE (need unique names for multiple extensions though).
3. On restore, tables are created and populated, then read_your_config(ARRAY-OF-TABLE-NAMES) is called in the extension.

This separates the configuration-for-user from configuration-for-extension. It allows the extension to decide whether to load the new config or reject it. It lets you test/demonstrate multiple configurations fairly simply. The "system_data" column scenario can then be a default implementation of read_your_config().

-- 
Richard Huxton
Archonet Ltd
Re: [HACKERS] Hypothetical Indexes - PostgreSQL extension - PGCON 2010
On 03/12/10 08:14, Jeroen Vermeulen wrote:
> On 2010-12-02 00:48, Ana Carolina Brito de Almeida wrote:
>> We would like to inform you all that our extension to PostgreSQL,
>> that includes hypothetical indexes (and soon index self-tuning), is
>> available through a sourceforge project.
> Looking at the sourceforge page, I'm left with one burning question:
> what are they for?

I believe they're for performance testing. Add a hypothetical index (takes very little time). Check estimated costs with EXPLAIN. If good, add the real index (takes lots of time).

Of course, they're also good for indexing hypothetical data ;-)

-- 
Richard Huxton
Archonet Ltd
Re: [HACKERS] Domains versus arrays versus typmods
On 20/10/10 01:47, Robert Haas wrote:
> On Tue, Oct 19, 2010 at 6:14 PM, Tom Lane <t...@sss.pgh.pa.us> wrote:
>> Comments? It might be reasonable to back-patch whatever we decide on
>> into 9.0, because it is so new, but I would be reluctant to go back
>> further unless we have some evidence that it's bothering people.
> It seems to me that this can could have a lot of worms in it, and I
> fear that there could be several rounds of fixes, which I would rather
> not inflict on users of supposedly-stable branches.

The work-around I applied when I stumbled across this was just to apply an explicit cast before my function's RETURN. That neatly solves my particular problem (which I at first thought was a formatting issue somewhere in my app).

The real danger with this is the opportunity to end up with occasional bad data in tables, quite possibly unnoticed. If I'd come across this in an existing system rather than a new app, I'm pretty sure it would have confused me for a lot longer than it did.

-- 
Richard Huxton
Archonet Ltd
Re: [HACKERS] reducing NUMERIC size for 9.1
On 16/07/10 13:44, Brendan Jurd wrote:
> pg_column_size() did return the results I was expecting.
> pg_column_size(0::numeric) is 8 bytes on 8.4 and it's 6 bytes on HEAD
> with your patch. At this scale we should be seeing around 2 million
> bytes saved, but instead the tables are identical. Is there some kind
> of disconnect in how the new short numeric is making it to the disk,
> or perhaps another effect interfering with my test?

You've probably got rows being aligned to a 4-byte boundary. You're probably not going to see any change unless you have a couple of 1-byte columns that get placed after the numeric. If you went from 10 bytes down to 8, that should be visible.

-- 
Richard Huxton
Archonet Ltd
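A quick way to see why 8 -> 6 bytes can vanish into padding while 10 -> 8 would not. This is a deliberately simplified sketch assuming the column pads out to a 4-byte boundary; the real heap layout also involves the tuple header and the alignment of whatever column follows.

```python
def aligned(size, boundary=4):
    """Round a column's size up to the next alignment boundary."""
    return -(-size // boundary) * boundary   # ceiling division, then scale

# 8.4 numeric vs. short numeric on HEAD: both pad out to 8 bytes,
# so the on-disk row width is unchanged.
print(aligned(8), aligned(6))    # 8 8
# A 10 -> 8 byte saving crosses a boundary and would show up on disk.
print(aligned(10), aligned(8))   # 12 8
```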
Re: [HACKERS] SHOW TABLES
On 15/07/10 19:44, Robert Haas wrote:
> On Jul 15, 2010, at 11:59 AM, Simon Riggs <si...@2ndquadrant.com> wrote:
>> I imagined that we would do something similar to EXPLAIN, a set of
>> text rows returned.
> That seems rather wretched for machine-parsability, which I think is
> an important property for anything we do in this area. We need to
> think harder about how we could structure this to allow returning
> more than just a tabular result set while still allowing clients easy
> programmatic access to the underlying data.
>> It should be possible to migrate \d options to using new outputs,
>> when everything works in a useful manner. Probably not in this
>> release.

Feature sounds useful. I think our \dxx commands have grown a little unwieldy in the last version or two. Which is not to say you can take \d away :-)

I was assuming the process would be something like:
1. Move existing \d queries into functions*
2. Convert psql to use those
3. Add SHOW xxx and have it return a single query
   Have it also issue "NOTICE: from psql, try \dt for more info"

If/when we have multiple sets returned from one query it should be simple to provide something pretty close to \d... from a single command. Trying to format the data in the backend is probably just going to frustrate writers of different clients (of which I think we have quite a few now).

* These functions could then be back-ported as an "admin-pack" too, for clients/apps that wanted cross-version compatibility for these sorts of things.

-- 
Richard Huxton
Archonet Ltd
Re: [HACKERS] SHOW TABLES
On 15/07/10 20:43, Greg Sabino Mullane wrote:
>> I was assuming the process would be something like:
>> 1. Move existing \d queries into functions*
>> 2. Convert psql to use those
> Oops! There goes your ability to handle older versions of Postgres
> from the existing psql.

Arse. It's little details like this that demonstrate why I'm a user and not a hacker :-)

-- 
Richard Huxton
Archonet Ltd
Re: [HACKERS] standard_conforming_strings
On 14/07/10 15:48, Robert Haas wrote:
> On Fri, Jan 29, 2010 at 10:02 PM, Josh Berkus <j...@agliodbs.com> wrote:
>> An actual plan here might look like "let's flip it before 9.1alpha1
>> so we can get some alpha testing cycles on it" ...
> Hey, let's flip it in 9.1 CF 1, so that we can have some alpha testing
> cycles on it. Should we do this? Patch attached.

Any reason not to add a line to the 9.0 docs/release notes saying "WARNING: The PGDG currently plans to change this setting's default in 9.1"?

-- 
Richard Huxton
Archonet Ltd
Re: [HACKERS] Check constraints on non-immutable keys
On 30/06/10 17:11, Robert Haas wrote:
> On Wed, Jun 30, 2010 at 11:49 AM, Tom Lane <t...@sss.pgh.pa.us> wrote:
>> Robert Haas <robertmh...@gmail.com> writes:
>>> My scintillating contribution to this discussion is the observation
>>> that unrestorable dumps suck.
>> No doubt, but is this a real problem in practice?
> Magnus tells me that that was what prompted his original email.

I've done it. Luckily only with a small and fully functioning database, so I could drop the constraint and re-dump it. Had a "recent_date" domain that was making sure new diary-style entries had a plausible date. Of course, two years later my dump can no longer restore the oldest record :-(

IMHO the real solution would be something that could strip/rewrite the constraint on restore, rather than trying to prevent people being stupid. People *will* just tag their functions as immutable to get them to work.

-- 
Richard Huxton
Archonet Ltd
Re: [HACKERS] Check constraints on non-immutable keys
On 30/06/10 18:11, Magnus Hagander wrote:
> On Wed, Jun 30, 2010 at 18:33, Richard Huxton <d...@archonet.com> wrote:
>> IMHO the real solution would be something that could strip/rewrite
>> the constraint on restore rather than trying to prevent people being
>> stupid though. People *will* just tag their functions as immutable to
>> get them to work.
> Are you sure? The people most likely to just tag their functions as
> immutable are the same ones most unlikely to know *how to do that*.
> At least for what I think is the majority case - which is calling
> builtin functions.

People just cut and paste this stuff from ancient blog entries. Understanding is not necessary. Hell, I do it sometimes if I'm dealing with something like LDAP, where I don't really have a deep knowledge of the situation.

-- 
Richard Huxton
Archonet Ltd
Re: [HACKERS] Hot Standby query cancellation and Streaming Replication integration
On 26/02/10 08:33, Greg Smith wrote:
> There are a number of HS tunables that interact with one another, and
> depending on your priorities a few ways you can try to optimize the
> configuration for what I expect to be common use cases for this
> feature. I've written a blog entry at
> http://blog.2ndquadrant.com/en/2010/02/tradeoffs-in-hot-standby-deplo.html
> that tries to explain all that background clearly.

It did too. Thanks for the nice summary people can be pointed at.

> I'm not sure what you might be expecting from the above combination,
> but what actually happens is that many of the SELECT statements on the
> table *that isn't even being updated* are canceled. You see this in
> the logs:

Hmm - this I'd already figured out for myself. It's just occurred to me that this could well be the case between databases too. Database A gets vacuumed, B gets its queries kicked off on the standby. Granted lots of people just have the one main DB, but even so...

> LOG: restored log file "000100A5" from archive
> ERROR: canceling statement due to conflict with recovery
> DETAIL: User query might have needed to see row versions that must be
> removed.
> STATEMENT: SELECT sum(abalance) FROM pgbench_accounts;
>
> Basically, every time a WAL segment appears that wipes out a tuple
> that SELECT expects should still be visible, because the dead row left
> behind by the update has been vacuumed away, the query is canceled.
> This happens all the time the way I've set this up, and I don't feel
> like this is a contrived demo. Having a long-running query on the
> standby while things get updated and then periodically autovacuumed on
> the primary is going to be extremely common in the sorts of production
> systems I expect want HS the most.

I can see pretty much everyone wanting HS+SR. Thousands of small DBs running on VMs for a start. Free mostly-live backup? Got to be a winner.

Dumb non-hacker question: why do we cancel all transactions rather than just those with ACCESS SHARE on the vacuumed table in question? Is it the simple fact that we don't know what table this particular section of WAL affects, or is it the complexity of tracking all this info?

> If you're running a system that also is using Streaming Replication,
> there is a much better approach possible. [...] Requires keep-alives
> with timestamps to be added to sync rep feature. If those keep-alives
> flowed in both directions, and included both timestamps *and* xid
> visibility information, the master could easily be configured to hold
> open xid snapshots needed for long running queries on the standby when
> that was necessary.

Presumably meaning we need *another* config setting to prevent excessive bloat on a heavily updated table on the master.

-- 
Richard Huxton
Archonet Ltd
Re: [HACKERS] Hot Standby query cancellation and Streaming Replication integration
On 26/02/10 14:10, Heikki Linnakangas wrote:
> Ideally the standby would stash away the old pages or tuples somewhere
> so that it can still access them even after replaying the WAL records
> that remove them from the main storage. I realize that's not going to
> happen any time soon because it's hard to do, but that would really be
> the most robust fix possible.

Something like snapshotting a filesystem, so updates continue while you're still looking at a static version.

-- 
Richard Huxton
Archonet Ltd
Re: [HACKERS] Hot Standby query cancellation and Streaming Replication integration
On 26/02/10 14:45, Heikki Linnakangas wrote:
> Richard Huxton wrote:
>> On 26/02/10 08:33, Greg Smith wrote:
>>> I'm not sure what you might be expecting from the above combination,
>>> but what actually happens is that many of the SELECT statements on
>>> the table *that isn't even being updated* are canceled. You see this
>>> in the logs:
>> Hmm - this I'd already figured out for myself. It's just occurred to
>> me that this could well be the case between databases too. Database A
>> gets vacuumed, B gets its queries kicked off on the standby.
> No, it's per-database already. Only queries in the same database are
> canceled.

That's a relief.

>> Dumb non-hacker question: why do we cancel all transactions rather
>> than just those with ACCESS SHARE on the vacuumed table in question?
>> Is it the simple fact that we don't know what table this particular
>> section of WAL affects, or is it the complexity of tracking all this
>> info?
> The problem is that even if transaction X doesn't have an (access
> share) lock on the vacuumed table at the moment, it might take one in
> the future. Simon proposed mechanisms for storing the information
> about vacuumed tables in shared memory, so that if X takes the lock
> later on it will get canceled at that point, but that's 9.1 material.

I see - we'd need to age the list of vacuumed tables too, so when the oldest transactions complete the correct flags get cleared.

Can we not wait to cancel the transaction until *any* new lock is attempted though? That should protect all the single-statement long-running transactions that are already underway. Aggregates etc.

-- 
Richard Huxton
Archonet Ltd
Re: [HACKERS] Hot Standby query cancellation and Streaming Replication integration
Replying to my own post - first sign of madness...

Let's see if I've got the concepts clear here; hopefully my thinking it through will help others reading the archives.

There are two queues:
1. Cleanup on the master
2. Replay on the slave

Running write queries on the master adds to both queues. Running (read-only) queries on the slave prevents you removing from both queues.

There are two interesting measurements of age/size:
1. Oldest item in / length of queue (knowable)
2. How long it will take to clear the queue (estimable at best)

You'd like to know #2 to keep up with your workload. Unfortunately, you can't for certain unless you have control over new incoming queries (on both master and slave). You might want four separate GUCs for the two measurements on the two queues. We currently have two that (sort of) match #1, the oldest item (vacuum_defer_cleanup_age, max_standby_delay).

Delaying replay on a slave has no effect on the master. If a slave falls too far behind it's responsible for catch-up (via normal WAL archives). There is no point in delaying cleanup on the master unless it's going to help one or more slaves. In fact, you don't want to start delaying cleanup until you have to, otherwise you're wasting your delay time. This seems to be the case with vacuum_defer_cleanup_age. If I have a heavily-updated table and I defer vacuuming, then by the time any given query starts on the slave its grace period is going to be half used up already.

There's also no point in deferring cleanup on the master if the standby is already waiting on a conflict that will cause its queries to be cancelled anyway. Not only won't it help, but it might make things worse: transactions will be cancelled, the conflict will be replayed and (presumably) queries will be re-submitted only to be cancelled again. This is what Greg Smith's discussion of the keep-alives was about - giving the master enough information to be smarter about cleanup (and making the conflicts more fine-grained).

The situation with deferring on one or both ends of the process just gets more complicated with multiple slaves. There are all sorts of unpleasant feedback loops I can envisage there.

For the case of a single slave being used to run long reporting queries, the ideal scenario would be the following. The master starts deferring vacuum activity just before the query starts. When that times out, the slave receives the cleanup info, refuses to replay it and starts its delay. This gives you a total available query time of:

  natural time between vacuums + vacuum delay + WAL transfer time + standby delay

I can think of five useful things we should be doing (and might be already - I don't know).

1. On the master, deduce whether the slave is already waiting on a query. If so, don't bother delaying cleanup. Clearly you don't want to be signalling hundreds of times a second though. Does the slave pause fetching via streaming replication if replay is blocked on a query? Could we signal half-way to max-age or some such?

2. Perhaps simpler than trying to make the master smarter: just allow SET this_transaction_is_probably_a_long_one = true on the slave. That (a) clears the queue on the slave and (b) sends the signal to the master, which then starts deferring vacuum.

3. Do a burst of cleanup activity on the master after blocking. This should concentrate conflicts together when they reach the slave. Perhaps vacuum_defer_cleanup_age should be vacuum_deferred_queue_size and measure the amount of work to do, rather than the max age of the oldest cleanup (if I've understood correctly).

4. Do a burst of replay on the slave after blocking. Perhaps every time it cancels a transaction it should replay at least half the queued WAL before letting new transactions start. Or perhaps it replays any vacuum activity it comes across and then stops. That should sync with #3, assuming the slave doesn't lag the master too much.

5. I've been mixing "defer" and "delay", as do the docs. We should probably settle on one or the other. I think "defer" conveys the meaning more precisely, but what about non-native English speakers?

-- 
Richard Huxton
Archonet Ltd
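As a back-of-envelope illustration of that query-time budget (all numbers hypothetical, in seconds - this is just the additive formula above, not a model of the real tunables):

```python
def max_query_time(natural_gap, vacuum_defer, wal_transfer, standby_delay):
    """Rough upper bound on how long a standby query can run before a
    cleanup conflict cancels it, assuming every deferral window is
    fresh when the query starts."""
    return natural_gap + vacuum_defer + wal_transfer + standby_delay

# e.g. vacuum runs every 600s, master defers cleanup 300s, WAL ships in
# 10s, standby delays conflicting replay another 300s:
print(max_query_time(600, 300, 10, 300))  # 1210

# But if the query starts halfway through the vacuum cycle and the
# master's deferral window, part of the budget is already spent - the
# point made above about vacuum_defer_cleanup_age:
print(max_query_time(600 / 2, 300 / 2, 10, 300))
```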
[HACKERS] plperl.on_init - bug or just me?
From memory and the thread below, I thought one of the key uses of plperl.on_init was to let me use a module from trusted plperl.
http://archives.postgresql.org/pgsql-hackers/2010-02/msg00167.php

The example below has a TestModule that just exports one sub - visible from plperlu but not plperl. Presumably Safe just clamps down and my sub isn't marked as acceptable. Is this intended, or am I doing something stupid?

postgresql.conf:
plperl.on_init = 'use lib "/home/richardh/dev/"; use TestModule qw(add_one);'

-- tries to call TestModule::add_one
richardh=# SELECT add_one(1);
ERROR: Undefined subroutine TestModule::add_one called at line 1.
CONTEXT: PL/Perl function "add_one"

-- tries to call the exported main::add_one
richardh=# SELECT add_one_e(1);
ERROR: Undefined subroutine main::add_one called at line 1.
CONTEXT: PL/Perl function "add_one_e"

-- plperlu - TestModule::add_one
richardh=# SELECT add_one_u(1);
 add_one_u
-----------
         2
(1 row)

-- 
Richard Huxton
Archonet Ltd
Re: [HACKERS] plperl.on_init - bug or just me?
On 25/02/10 17:10, Andrew Dunstan wrote:
> Richard Huxton wrote:
>> Presumably Safe just clamps down and my sub isn't marked as
>> acceptable. Is this intended, or am I doing something stupid?
> It's intended (at least by me). Also, please see the recent discussion
> about loading extra stuff into the Safe container.

Ah - looks like I've missed a thread.

> At the very least that has been shelved for now. We're going to
> proceed with deliberation in this area. I'm quite concerned to make
> sure that we don't provide an opportunity for people to undermine the
> behaviour of the trusted language.

Fair enough.

-- 
Richard Huxton
Archonet Ltd
Re: [HACKERS] FW: Unable to install PostgreSQL on Windows Server 2003 SP2
On 24/02/10 13:09, William wrote:
> Hello Pqsql-hacker, I have tried everything and ask many but cannot
> get my Installation of your PostgreSQL 8.4.1.1 to install.

While you are having problems, take the opportunity to download 8.4.2 and work with that. That's the latest set of bugfixes.

> I following all the steps exactly but I cannot get the PostgresSQL to
> install on my Windows Server 2003 Datacenter Edition SP2. Is there a
> problem in installing the PostgreSQL on this version of Windows which
> is Hosted at a Host Provider. if not why will it not install. I have
> tried everything but will not install. That is why I am here asking
> you hoping you can get me past this. Following is the installer log
> files that shows everything that went on during the install.

Have you read the logs? Searching for "Error" is the best way to start. The first occurrence in bitrock_installer.log is at line 3244. If you read the 20 or so lines before that, you will see that it says it installed everything:

  Success. You can now start the database server using...

It then fails when trying to grant access to the data directory for your service account (the user the database runs as):

  Granting service account access to the data directory (using cacls):
  processed dir: D:\APPS\PostgreSQL\8.4\data
  The data is invalid.
  Failed to grant service account access to the data directory (D:\APPS\PostgreSQL\8.4\data)

After that, you get more errors because the service couldn't be started. The second logfile is identical to the first.

So - either there is a bug in the installer, or your administrator account in the virtual server doesn't have permission to do this installation properly. It doesn't matter which from your point of view. I'm not a Windows expert regarding PostgreSQL, but there are some obvious things to try.

First thing to do: check whether PostgreSQL is still installed and that you have a data directory in D:\APPS\PostgreSQL\8.4\data. If so, try granting permission on that data directory to the postgres user manually. Just right-click the folder and add full rights for postgres. You should then be able to start the service manually (I think there is a menu item - if not, it will be in the services control panel).

If that all works, re-run the installer and you should be able to re-install the adminpack etc. over the top of your now-working installation. If you didn't find the data directory, create it, grant permissions to postgres and then try a full re-install.

-- 
Richard Huxton
Archonet Ltd
[HACKERS] Probably badly timed suggestion: pl/perl calling style same as C style
With plperl.on_init allowing the loading of modules, might there be some merit (and little cost) in allowing the same style of function-mapping as with C functions?

CREATE FUNCTION add_one(integer) RETURNS integer
    AS 'DIRECTORY/funcs', 'add_one'
    LANGUAGE C STRICT;

CREATE FUNCTION add_one(integer) RETURNS integer
    AS 'My::Package', 'add_one'
    LANGUAGE plperl STRICT;

-- 
Richard Huxton
Archonet Ltd
Re: [HACKERS] PostgreSQL::PLPerl::Call - Simple interface for calling SQL functions from PostgreSQL PL/Perl
On 17/02/10 18:30, David E. Wheeler wrote:
> On Feb 17, 2010, at 4:28 AM, Tim Bunce wrote:
>> Umm, perhaps F->funcname(@args), or PG->funcname(@args), or ... ?
>> Anyone got any better suggestions?
> PG is good. Or maybe DB?

It's a module whose only use is embedded in a DB called PG - not sure those carry any extra info. It also treads on the toes of PG->not_a_function, should such a beast be needed. I like F->funcname or FN->funcname myself.

-- 
Richard Huxton
Archonet Ltd
Re: [HACKERS] PostgreSQL::PLPerl::Call - Simple interface for calling SQL functions from PostgreSQL PL/Perl
On 16/02/10 17:11, David E. Wheeler wrote:
> On Feb 16, 2010, at 4:08 AM, Tim Bunce wrote:
>> Wouldn't work unless you'd installed an AUTOLOAD function into each
>> schema:: package that you wanted to use. (schema->SP::function_name()
>> could be made to work but that's just too bizzare :)
> Maybe SP->schema('public')->function_name()? I kind of like the idea
> of objects created for specific schemas, though (as in your example).
> Maybe that, too, is something that could be specified in the `use`
> statement. Or maybe `SP::schema->function`? That's kind of nice, keeps
> things encapsulated under SP. You could then do the identifier
> quoting, too.

The downside is that, once loaded, the schema package names would be locked down. If I created a new schema in the connection, SP wouldn't know about it.

Perhaps it would be better to be explicit about what's going on?

  SEARCHPATH->function()
  SCHEMA('public')->function2()

Or did SP mean Stored Procedure?

On a (kind of) related note, it might be worthwhile to mention search_path in the docs and point out it has the same pros/cons as unix file paths.

-- 
Richard Huxton
Archonet Ltd
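The search_path/$PATH analogy, spelled out as a toy Python sketch (purely illustrative - the function and catalog names here are made up, and this is not how the Perl module would implement lookup): the first schema in the path that defines the function wins, just as the first directory in $PATH containing an executable wins.

```python
def resolve(func, search_path, catalog):
    """Return the first schema in search_path that defines func,
    mimicking both PostgreSQL's search_path and the shell's $PATH."""
    for schema in search_path:
        if (schema, func) in catalog:
            return f"{schema}.{func}"
    raise LookupError(f"function {func} not found in {search_path}")

# A pretend catalog of (schema, function) pairs.
catalog = {("pg_catalog", "now"), ("public", "add_one"), ("app", "add_one")}

print(resolve("add_one", ["app", "public"], catalog))   # app.add_one shadows public.add_one
print(resolve("add_one", ["public", "app"], catalog))   # reordering the path changes the result
```

The same pro and con as unix paths follows directly: convenient unqualified names, at the price of surprises when the path changes underneath you.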
Re: [HACKERS] PostgreSQL::PLPerl::Call - Simple interface for calling SQL functions from PostgreSQL PL/Perl
On 16/02/10 17:51, David E. Wheeler wrote: On Feb 16, 2010, at 9:43 AM, Richard Huxton wrote: Perhaps it would be better to be explicit about what's going on? SEARCHPATH->function() SCHEMA('public')->function2() Or did SP mean Stored Procedure? Yes. Hmm - might be worth avoiding that in case we get actual transaction-spanning stored procedures at any point. -- Richard Huxton Archonet Ltd
Re: [HACKERS] PostgreSQL::PLPerl::Call - Simple interface for calling SQL functions from PostgreSQL PL/Perl
On 15/02/10 10:32, Tim Bunce wrote: On Mon, Feb 15, 2010 at 07:31:14AM +, Richard Huxton wrote: Is there any value in having a two-stage interface? $seq_fn = get_call('nextval(regclass)'); $foo1 = $seq_fn->($seq1); $foo2 = $seq_fn->($seq2); I don't think there's significant performance value in that. Perhaps it could be useful to be able to pre-curry a call and then pass that code ref around, but you can do that trivially already: $nextval_fn = sub { call('nextval(regclass)', @_) }; $val = $nextval_fn->($seq1); or $nextfoo_fn = sub { call('nextval(regclass)', 'foo_seqn') }; $val = $nextfoo_fn->(); Fair enough. Just wondered whether it was worth putting that on your side of the interface. I'm forced to concede you probably have more experience in database-related APIs than me :-) -- Richard Huxton Archonet Ltd
Re: [HACKERS] PostgreSQL::PLPerl::Call - Simple interface for calling SQL functions from PostgreSQL PL/Perl
On 12/02/10 23:10, Tim Bunce wrote: There was some discussion a few weeks ago about inter-stored-procedure calling from PL/Perl. I'd greatly appreciate any feedback. Looks great. PostgreSQL::PLPerl::Call - Simple interface for calling SQL functions from PostgreSQL PL/Perl I don't think you show an example with an explicit schema name being used. Can't hurt to make it obvious. $seqn = call('nextval(regclass)', $sequence_name); Is there any value in having a two-stage interface? $seq_fn = get_call('nextval(regclass)'); $foo1 = $seq_fn->($seq1); $foo2 = $seq_fn->($seq2); -- Richard Huxton Archonet Ltd
Re: [HACKERS] Re: [COMMITTERS] pgsql: Reduce the chatter to the log when starting a standby server.
On 12/02/10 15:37, Fujii Masao wrote: On Sat, Feb 13, 2010 at 12:28 AM, Robert Haas robertmh...@gmail.com wrote: Well, let's come up with something else then. continuous_recovery? One problem with the otherwise entirely wonderful HS/SR pairing is the whole business of the config parameters. They feel too bottom-up. Individually, each one makes sense, but if you look at them on a page they don't say master/slave replication to me. What about something like: # Primary archive_mode = producer archive_producer_command = 'cp %p .../%f' max_consumers = 5 # Standby archive_mode = producer, consumer archive_producer_command = 'cp %p .../%f' archive_consumer_command = 'cp %p .../%f' consume_from = 'host=... user=...' Three other points that struck me: 1. Why have a separate recovery.conf file rather than just put the commands inline? We can use the include directive to have them in a separate file if required. 2. Why have a finish.replication file, rather than SELECT pg_finish_replication()? -- Richard Huxton Archonet Ltd
Re: [HACKERS] Avoiding bad prepared-statement plans.
On 09/02/10 12:08, Jeroen Vermeulen wrote: = Projected-cost threshold = [snip - this is the simple bit. Sounds very sensible. ] = Actual-cost threshold = Also stop using the generic plan if the statement takes a long time to run in practice. Do you mean: 1. Rollback the current query and start again 2. Mark the plan as a bad one and plan again next execute If you can figure out how to do #1 then you could probably do it for all queries, but I'm guessing it's far from simple to implement. = Plan refresh = Periodically re-plan prepared statements on EXECUTE. This is also a chance for queries that were being re-planned every time to go back to a generic plan. Presumably some score based on update stats and vacuum activity etc. The good side of all these ideas is good indeed. The bad side is plan instability. Someone somewhere will have a generic plan that turns out better than the specific plan (due to bad stats or config settings or just planner limitations). The question is (I guess): How many more winners will there be than losers? -- Richard Huxton Archonet Ltd -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] Avoiding bad prepared-statement plans.
On 09/02/10 14:25, Jeroen Vermeulen wrote: Richard Huxton wrote: = Actual-cost threshold = Also stop using the generic plan if the statement takes a long time to run in practice. Do you mean: 1. Rollback the current query and start again 2. Mark the plan as a bad one and plan again next execute If you can figure out how to do #1 then you could probably do it for all queries, but I'm guessing it's far from simple to implement. I'm talking about #2. As a matter of fact #1 did come up in one of those discussions, but how do you know you're not killing the query juuust before it'd done, and then maybe executing a different plan that's no better? Ah, you'd need to be smarter when planning and also remember the expected rows from each node. That way if your (index driven) inner node was expecting 3 rows you could mark it to force a cancellation if it returns (say) 30 or more. You'd allow more slack in later processing and less slack earlier on where a bad estimate can explode the final number of rows. Or, there is always the case where we reverse-search an index to find the last 10 messages in a group say, but the particular group in question hasn't had a comment for months, so you trawl half the table. People regularly get bitten by that, and there's not much to be done about it. If we could abort when it looks like we're in worst-case rather than best-case scenarios then it would be one less thing for users to worry about. = Plan refresh = Periodically re-plan prepared statements on EXECUTE. This is also a chance for queries that were being re-planned every time to go back to a generic plan. Presumably some score based on update stats and vacuum activity etc. I was thinking of something very simple: re-do whatever we'd do if the statement were only being prepared at that point. Yes, I thought so, the scoring was for *when* to decide to cancel the old plan. I suppose total query-time would be another way to decide this plan needs reworking. 
The good side of all these ideas is good indeed. The bad side is plan instability. Someone somewhere will have a generic plan that turns out better than the specific plan (due to bad stats or config settings or just planner limitations). The question is (I guess): How many more winners will there be than losers? That's a good and surprising point, and therefore I'd like to draw attention away to a different point. :-) Yes, there will be losers in the sense that people may have optimized their use of prepared statements to whatever the current planner does. Maybe somebody out there even deliberately uses them to trick the planner into a different plan. But that is always going to happen; we're aiming for better plans, not for giving more detailed control over them. If you really can't take a change, don't upgrade. The competing point is: people out there may currently be forgoing prepared statements entirely because of erratic performance. To those people, if we can help them, it's like having a new feature. Oh, I'm persuaded, but that doesn't really get you anywhere :-) -- Richard Huxton Archonet Ltd -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
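The re-plan-on-misestimate trigger discussed in this thread can be sketched as a simple heuristic (a toy model in Python; the function name, the slack factor, and the per-node framing are all illustrative assumptions, not PostgreSQL source):

```python
# Hypothetical sketch of the idea upthread: keep the generic plan while
# actual row counts stay near the planner's estimates, and mark it stale
# for re-planning when a node returns far more rows than expected.

def should_replan(estimated_rows, actual_rows, slack=10):
    """Flag a cached plan for re-planning on the next EXECUTE when a
    node's actual row count blows past its estimate by the slack factor."""
    if estimated_rows <= 0:
        return actual_rows > slack        # no usable estimate to compare with
    return actual_rows >= estimated_rows * slack

# The example from the mail: an inner node expected 3 rows but saw 30.
assert should_replan(3, 30)       # misestimate: give up on this plan
assert not should_replan(3, 5)    # within tolerance: keep the generic plan
```

As the thread notes, the slack would need to be tighter early in the plan (where a bad estimate explodes the final row count) and looser later on.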
Re: [HACKERS] 8.5 vs. 9.0
On 21/01/10 09:37, Dave Page wrote: In an attempt to pre-empt the normally drawn-out discussions about what the next version of PostgreSQL will be numbered, the core team have discussed the issue and following a lengthy debate lasting literally a few minutes decided that the next release shall be... Wait for it... 9.0. You don't have a code-name. All the cool kids have code-names for their projects. There - that should distract everyone from actual release-related work for the next week or so :-) -- Richard Huxton Archonet Ltd
Re: [HACKERS] Block-level CRC checks
Bruce Momjian wrote: Tom Lane wrote: The suggestions that were made upthread about moving the hint bits could resolve the second objection, but once you do that you might as well just exclude them from the CRC and eliminate the guessing. OK, crazy idea #3. What if we had a per-page counter of the number of hint bits set --- that way, we would only consider a CRC check failure to be corruption if the count matched the hint bit count on the page. Can I piggy-back on Bruce's crazy idea and ask a stupid question? Why are we writing out the hint bits to disk anyway? Is it really so slow to calculate them on read + cache them that it's worth all this trouble? Are they not also to blame for the write my import data twice feature? -- Richard Huxton Archonet Ltd -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] Block-level CRC checks
Greg Stark wrote: On Tue, Dec 1, 2009 at 9:57 PM, Richard Huxton d...@archonet.com wrote: Why are we writing out the hint bits to disk anyway? Is it really so slow to calculate them on read + cache them that it's worth all this trouble? Are they not also to blame for the write my import data twice feature? It would be interesting to experiment with different strategies. But the results would depend a lot on workloads and I doubt one strategy is best for everyone. It has often been suggested that we could set the hint bits but not dirty the page, so they would never be written out unless some other update hit the page. In most use cases that would probably result in the right thing happening where we avoid half the writes but still stop doing transaction status lookups relatively promptly. The scary thing is that there might be use cases such as static data loaded where the hint bits never get set and every scan of the page has to recheck those statuses until the tuples are frozen. And how scary is that? Assuming we cache the hints... 1. With the page itself, so same lifespan 2. Separately, perhaps with a different (longer) lifespan. Separately would then let you trade complexity for compactness - all of block B is deleted, all of table T is visible. So what is the cost of calculating the hint-bits for a whole block of tuples in one go vs reading that block from actual spinning disk? There does need to be something like the hint bits which does eventually have to be set because we can't keep transaction information around forever. Even if you keep the transaction information all the way back to the last freeze date (up to about 1GB and change I think) then the data has to be written twice, the second time is to freeze the transactions. In the worst case then reading a page requires a random page access (or two) from anywhere in that 1GB+ file for each tuple on the page (whether visible to us or not). 
While on that topic - I'm assuming freezing requires substantially more effort than updating hint bits? -- Richard Huxton Archonet Ltd -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] IS DISTINCT FROM vs. ANY
David Fetter wrote: Folks, Shouldn't this work and produce a true? SELECT NULL IS NOT DISTINCT FROM ANY(ARRAY['a',NULL]); ERROR: syntax error at or near ANY LINE 1: SELECT NULL IS NOT DISTINCT FROM ANY(ARRAY['a',NULL]); It should, but probably depends on whether IS NOT DISTINCT should be considered an operator. http://www.postgresql.org/docs/8.4/static/functions-comparisons.html#AEN16561 Got caught by the same thing a couple of days ago. -- Richard Huxton Archonet Ltd -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] Proposal - temporal contrib module
Heikki Linnakangas wrote: Scott Bailey wrote: I would like to add a temporal contrib module. I'm very pleased to see people working on temporal issues, BTW! Me too - common use-case and difficult to handle without the right types/operators. Nulls - A common use case for periods is for modeling valid time. Often the end point is not known. For instance, you know when an employee has been hired but the termination time typically wouldn't be known ahead of time. We can either represent these with a null end time or with infinity. But I'm not sure how to deal with them. Obviously we can test for containment and overlap. But what about length or set operations? Hmm. Infinity feels like a better match. The behavior of length and set operations falls out of that naturally. For example, length of a period with an infinite beginning or end is infinite. For set operations, for example the intersection of [123, infinity] and [100, 160] would be [123, 160]. There are cases where one time is genuinely unknown, and there we need a null. For the until further notice scenarios, infinity seems the sensible choice. Where a null is present length is clearly null, and sets I guess should propagate the nulls. [123,null] intersecting [100,160] should be [123,null]. That's assuming we've got a guarantee that from <= to for all periods. Temporal Keys - We need two types of temporal keys. A primary key, exclusion type prevents overlap so someone isn't at two places at the same time. You're going to upset a lot of managers if they can't do that ;-) -- Richard Huxton Archonet Ltd
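The infinity semantics argued for above do fall out of plain min/max arithmetic, which a short sketch shows (Python here for illustration; the tuple representation is an assumption, and SQL NULL is deliberately not modeled):

```python
import math

# Toy model of the period semantics discussed above: open-ended
# "until further notice" periods use an infinite endpoint, and
# intersection is just max of the starts and min of the ends.

def intersect(a, b):
    """Intersection of two [from, to] periods, or None if disjoint."""
    lo, hi = max(a[0], b[0]), min(a[1], b[1])
    return (lo, hi) if lo <= hi else None

INF = math.inf
# The example from the mail: [123, infinity] ∩ [100, 160] = [123, 160].
assert intersect((123, INF), (100, 160)) == (123, 160)
assert intersect((0, INF), (200, INF)) == (200, INF)
assert intersect((0, 10), (20, 30)) is None
```

A null (unknown) endpoint would need the three-valued handling the thread describes and can't be folded into min/max this way.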
Re: [HACKERS] Using views for row-level access control is leaky
Heikki Linnakangas wrote: The most useful automatic annotation I can see is to treat functions implementing B-tree operators as safe. I *think* that's safe, anyway. Index lookups and single-type comparisons were the only things I could come up with as safe. Unless there is some way to generate an error from geometric ops (overflow or some such). Anything involving a type-cast can obviously be finessed. If you allow arithmetic then you could trigger an overflow or divide-by-zero error. Hmm - you can probably do something evil with non-UTF8 characters if you allow string operations. Would string comparisons be safe (because a literal would be caught before the view gets evaluated)? -- Richard Huxton Archonet Ltd -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] Using views for row-level access control is leaky
Heikki Linnakangas wrote: CREATE VIEW phone_number AS SELECT person, phone FROM phone_data WHERE phone NOT LIKE '6%'; CREATE OR REPLACE FUNCTION expose_person (person text, phone text) RETURNS bool AS $$ BEGIN RAISE NOTICE 'person: % number: %', person, phone; RETURN true; END; $$ LANGUAGE plpgsql COST 0.01; postgres=> SELECT * FROM phone_number WHERE expose_person(person, phone); NOTICE: person: public person number: 12345 NOTICE: person: secret person number: 67890 person | phone ---+--- public person | 12345 Ouch! 1. Change the planner so that conditions (and join!) in the view are always enforced first, before executing any quals from the user-supplied query. Unfortunately that would have a catastrophic effect on performance. I have the horrible feeling that you're going to end up doing this (possibly in conjunction with #4). Once you've executed a user-defined function on a hidden row I think the game is lost. That might even apply to non-trivial expressions too. 2. As an optimization, we could keep the current behavior if the user has access to all the underlying tables anyway, but that's nontrivial because permission checks are supposed to be executed at runtime, not plan time. 3. Label every function as safe or unsafe, depending on whether it can leak information about the arguments. Classifying functions correctly can be a bit tricky; e.g. functions that throw an error on some input values could be exploited. [snip] I'm sure there's a way to generate an error on-demand for rows with specific numbers. That opens you up to fishing for hidden rows. It might be possible to label a subset of operators etc as safe. I'd guess that would exclude any casts in it, and perhaps CASE. Hmm - you could probably generate a divide-by-zero or overflow error or some such for any targeted numeric value though. 4. 
Make the behavior user-controllable, something along the lines of CREATE RESTRICTED VIEW ..., to avoid the performance impact when views are not used for access control. Not pretty, but solves the problem. -- Richard Huxton Archonet Ltd -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] Using views for row-level access control is leaky
Pavel Stehule wrote: What version do you have? I am cannot repeat it. It will depend on the relative cost of the clauses (though 0.0001 should have been enough to force it). Try: CREATE OR REPLACE FUNCTION row_hidden (phone text) RETURNS bool AS $$ BEGIN RETURN phone LIKE '6%'; END; $$ LANGUAGE plpgsql COST 999; CREATE VIEW phone_number AS SELECT person, phone FROM phone_data WHERE NOT row_hidden(phone); -- Richard Huxton Archonet Ltd -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] Using views for row-level access control is leaky
Pavel Stehule wrote: 2009/10/22 Heikki Linnakangas heikki.linnakan...@enterprisedb.com: That example I ran on CVS HEAD, but it's a generic problem on all versions. postgres=# select version(); version PostgreSQL 8.5devel on i686-pc-linux-gnu, compiled by GCC gcc (GCC) 4.4.1 20090725 (1 row) postgres=# select * from x; a │ b ┼ 10 │ 20 (1 row) postgres=# create view v as select * from x where b > 20; ^^^ This is the expression that needs to be expensive. Then the exposing function needs to be cheap. That makes the planner run the exposing function first. -- Richard Huxton Archonet Ltd
Re: [HACKERS] Using views for row-level access control is leaky
Richard Huxton wrote: Heikki Linnakangas wrote: CREATE VIEW phone_number AS SELECT person, phone FROM phone_data WHERE phone NOT LIKE '6%'; CREATE OR REPLACE FUNCTION expose_person (person text, phone text) RETURNS bool AS $$ begin RAISE NOTICE 'person: % number: %', person, phone; RETURN true; END; $$ LANGUAGE plpgsql COST 0.01; postgres= SELECT * FROM phone_number WHERE expose_person(person, phone); NOTICE: person: public person number: 12345 NOTICE: person: secret person number: 67890 person | phone ---+--- public person | 12345 Hmm - just using SQL (but with an expensive view filtering function): SELECT * FROM phone_number WHERE (CASE WHEN phone = '67890' THEN person::int ELSE 2 END)=2; ERROR: invalid input syntax for integer: secret person You could get a related problem where a view exposes a text column full of valid dates which the user then tries to cast to date. If the underlying table contains non-dates you could still get an error. Arguably the view should have handled the cast in this case though. -- Richard Huxton Archonet Ltd -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] Using views for row-level access control is leaky
Pavel Stehule wrote: postgres=# create or replace function vv(int, int) returns bool as $$begin raise notice '% %', $1, $2; return true; end$$ language plpgsql COST 0.01; CREATE FUNCTION postgres=# select * from v where vv(a,b);NOTICE: 10 20 a │ b ───┼─── (0 rows) still I have not bad result, but, yes, I see what I could not to see. Ah - that's the problem. It's not possible to get the hidden values into the result set, but it is possible to see them. It only matters if you are using the view to prevent access to certain rows. -- Richard Huxton Archonet Ltd -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
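Why cost-ordered qual evaluation leaks hidden rows can be shown outside the database (a toy model in Python; the row data, costs, and evaluation order are illustrative assumptions, not the real planner):

```python
# Toy model of the leak discussed in this thread: the planner runs the
# cheapest qual first, so an attacker's cheap function is evaluated on
# rows the view's (more expensive) filter was supposed to hide.

rows = [("public person", "12345"), ("secret person", "67890")]

leaked = []

def expose(person, phone):           # attacker's function, COST 0.01
    leaked.append((person, phone))   # stands in for the RAISE NOTICE channel
    return True

def view_filter(person, phone):      # the view's qual, assumed expensive
    return not phone.startswith("6")

# Cost-ordered evaluation: cheap user qual before the view's qual.
result = [r for r in rows if expose(*r) and view_filter(*r)]

assert result == [("public person", "12345")]    # the output looks safe...
assert ("secret person", "67890") in leaked      # ...but the hidden row leaked
```

As noted upthread, the hidden values never reach the result set; the side channel is enough to see them.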
Re: [HACKERS] navigation menu for documents
Peter Eisentraut wrote: On Fri, 2009-07-17 at 13:58 +0100, Richard Huxton wrote: 2. Titles on navigation links. Run ./STYLING/title_links.pl and it should add title attributes to the navigation links. This means hovering over the top links gives the title of the page they will go to. Presumably we could do this directly from the sgml source, and I think it's probably worthwhile. I have updated the stylesheet to add a title attribute to the header links. That has about the same effect as your script. Ah, good. My script was only ever intended to demonstrate. That's one item we can tick off. -- Richard Huxton Archonet Ltd -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] navigation menu for documents
David E. Wheeler wrote: On Sep 29, 2009, at 8:55 AM, Richard Huxton wrote: For the browser, does the following match what you're after, Andrew? - clicking chapter title opens the browser panel - panel stays open until you click close icon - panel contains collapsable tree of chapter/section headings Alternatively, could just auto-open the browser panel if javascript is enabled and window is wider than N pixels. Why wouldn't the entire TOC be in a collapsed list? Permanently on-screen? My only concern there would be for people viewing on phones etc. In addition we'll presumably want to meet: - no external js libraries (or do we care, if we just reference it from google?) Save yourself the hassle and just bundle jQuery. That's what I've done for Pod::Site (module that builds the Bricolage API browser). It's MIT licensed (well MIT+GPL) which is BSD compatible, but I don't know if that's acceptable. It would be easier for me if it could be bundled and presumably make it easier for other contributors in the future too. - navigation is optional, disabling js leaves docs as at present As long as there's a way to get the nav back from a link on each doc page. - works on all reasonable browsers (anything not IE6) +1 (IE6--) - works online and in downloaded docs (except Windows .chm of course) That'd be nice, too. Offline is crucial as far as I'm concerned. -- Richard Huxton Archonet Ltd -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] navigation menu for documents
David E. Wheeler wrote: On Oct 1, 2009, at 1:12 AM, Richard Huxton wrote: Why wouldn't the entire TOC be in a collapsed list? Permanently on-screen? My only concern there would be for people viewing on phones etc. I have to admit that I'm never looking at the Pg docs on my iPhone. This is mainly because I use them as a reference while hacking, and I'm not (yet) hacking PostgreSQL on my phone. Ah, I _do_ look at them on my Nokia N810 when I'm on the train etc. -- Richard Huxton Archonet Ltd -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] navigation menu for documents
Andrew Dunstan wrote: Alvaro Herrera wrote: Did this go anywhere? Well, it was sorta nice but what Richard sent wasn't really what I want, at least, which is more along the lines of the menu David Wheeler uses for the Bricolage API docs. Well, if we nail down the details I'm happy to do the grunt-work. I think #1, #2 (fixed navigation links, titles) are a separate question and just need to be decided upon by those who feel strongly one way or t'other. For the browser, does the following match what you're after, Andrew? - clicking chapter title opens the browser panel - panel stays open until you click close icon - panel contains collapsable tree of chapter/section headings Alternatively, could just auto-open the browser panel if javascript is enabled and window is wider than N pixels. In addition we'll presumably want to meet: - no external js libraries (or do we care, if we just reference it from google?) - navigation is optional, disabling js leaves docs as at present - works on all reasonable browsers (anything not IE6) - works online and in downloaded docs (except Windows .chm of course) -- Richard Huxton Archonet Ltd -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] GRANT ON ALL IN schema
decibel wrote: In this specific case, I think there's enough demand to warrant a built-in mechanism for granting, but if something like exec() is built-in then the bar isn't as high for what the built-in GRANT mechanism needs to handle. CREATE OR REPLACE FUNCTION tools.exec( sql text , echo boolean ) RETURNS text LANGUAGE plpgsql AS $exec$ Perhaps another two functions too: list_all(objtype, schema_pattern, name_pattern) exec_for(objtype, schema_pattern, name_pattern, sql_with_markers) Obviously the third is a simple wrapper around the first two. -- Richard Huxton Archonet Ltd -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
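The proposed list_all()/exec_for() pair is easy to model (a Python sketch with a fake catalog; the function names come from the mail above, but the matching rules and the %s marker are assumptions):

```python
import fnmatch

# Hypothetical sketch of list_all()/exec_for(): match catalog objects by
# type and glob-style patterns, then substitute each qualified name into
# an SQL template. The catalog contents here are made up.

catalog = [
    ("table", "public", "orders"),
    ("table", "public", "users"),
    ("function", "tools", "exec"),
]

def list_all(objtype, schema_pattern, name_pattern):
    return [(s, n) for (t, s, n) in catalog
            if t == objtype
            and fnmatch.fnmatch(s, schema_pattern)
            and fnmatch.fnmatch(n, name_pattern)]

def exec_for(objtype, schema_pattern, name_pattern, sql_with_markers):
    return [sql_with_markers.replace("%s", f"{s}.{n}")
            for (s, n) in list_all(objtype, schema_pattern, name_pattern)]

stmts = exec_for("table", "public", "*", "GRANT SELECT ON %s TO readonly;")
assert stmts == ["GRANT SELECT ON public.orders TO readonly;",
                 "GRANT SELECT ON public.users TO readonly;"]
```

In the real plpgsql version the generated statements would of course be fed through EXECUTE rather than returned.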
Re: [HACKERS] When is a record NULL?
David E. Wheeler wrote: On Jul 23, 2009, at 9:34 PM, Brendan Jurd wrote: I guess the spec authors figured they might as well make IS [NOT] NULL do something useful when applied to a row rather than throwing an error. I tend to agree. Frankly, I find the state where a record with a NULL and a not-null value being neither NULL nor not NULL bizarre. I'm guessing the justification (and presumably this was worked out based on the behaviour of one or more of the big DB providers and then justified afterwards) is that the composite is partially unknown. Of course you should either introduce a new code or throw an error, but four-valued logic isn't going to win you any friends. If the argument *is* that because you know part of the overall value the composite isn't null then I'd argue that ('abc' || null) isn't null either. After all, the first three characters are perfectly well established. I hope that provides some clarity. It's useful to learn that `ROW(NULL, NULL)` is NULL, but I find the whole thing totally bizarre. Is it me? Yes, just you. None of the rest of us have any problems with this at all :-) -- Richard Huxton Archonet Ltd -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
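The spec behaviour being debated can be stated compactly (a Python sketch of the assumed SQL semantics, with None standing in for NULL): a row IS NULL only if every field is null, IS NOT NULL only if every field is non-null, so a mixed row is neither.

```python
# Sketch of the SQL-standard row null tests discussed above.

def row_is_null(row):
    """ROW(...) IS NULL: true only when every field is NULL."""
    return all(f is None for f in row)

def row_is_not_null(row):
    """ROW(...) IS NOT NULL: true only when every field is non-NULL."""
    return all(f is not None for f in row)

assert row_is_null((None, None))        # ROW(NULL, NULL) IS NULL
mixed = ("abc", None)
# The bizarre bit: a partially-unknown row is neither NULL nor NOT NULL.
assert not row_is_null(mixed)
assert not row_is_not_null(mixed)
```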
Re: [HACKERS] Extensions User Design
Peter Eisentraut wrote: Instead of installing an extension, that is, say, a collection of types and functions provided by a third-party source, I would like to have a mechanism to deploy my own actual database application code. On the matter of schemas, I suggest that we consider two ideas that have helped RPM in its early days, when everyone had their own very specific ideas about what should be installed where: - file system hierarchy standard - relocations Of course if you have IMPORT from an extension, it's down to the DBA: INSTALL chinese_calendar; IMPORT FROM chinese_calendar SECTION (default) INTO SCHEMA pg_extension; IMPORT FROM chinese_calendar SECTION (year_names) INTO SCHEMA lookups; INSTALL peter_e_app; IMPORT FROM peter_e_app SECTION (all) INTO SCHEMA public; Of course this means two things: 1. Every extension has to have its own schema mappings. 2. The application view of the database is a sort of default extension. Pros: - Namespace collisions begone! - Anything to help extension upgrades could be re-used for applications (and vice-versa) - Some stuff isn't visible outside the extension *at all* - You can separate extension installation from usage (good for multi-user setups). Cons: - Extra layer of indirection (find my namespace -> namespace lookup -> object) - Extensions need to list what they export in what sections - More code required -- Richard Huxton Archonet Ltd
Re: [HACKERS] navigation menu for documents
Andrew Dunstan wrote: Peter Eisentraut wrote: This looks very cool, but should probably be implemented via a stylesheet change instead of some Perl parsing some HTML. :-) I'm not sure if this actually addresses Andrew's original concern, though. No, it doesn't. David Wheeler's navigation (see upthread) that he uses for the Bricolage docs does, however. Ah, if you can change the overall layout then the world is your shellfish of choice. Would it be possible to include jquery? It's GPL/MIT dual-licence. -- Richard Huxton Archonet Ltd -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
[HACKERS] OT: Testing - please ignore
-- Richard Huxton Archonet Ltd -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] navigation menu for documents
OK, if you untar the attached in the docs dir there are three separate sets of changes in it. It all functions, but consider it a discussion point rather than a patch. Presumably we'd need to discuss a patch over on the docs mailing-list. 1. Fixed navigation Copy STYLING/stylesheet.css over the existing one and you will have static navigation links top and bottom of the page. 2. Titles on navigation links. Run ./STYLING/title_links.pl and it should add title attributes to the navigation links. This means hovering over the top links gives the title of the page they will go to. Presumably we could do this directly from the sgml source, and I think it's probably worthwhile. With 1+2 I think there's an argument in favour of removing the bottom navigation - it's only useful if you can't see the top links. 3. Javascript popup menu. This uses jquery, but that's just for convenience during discussion. You could rework this without it. Copy STYLING/*.js and STYLING/menu.inc to the docs dir and then run ./STYLING/include_javascript.pl to include the popup script. The central chapter heading section of the top navigation area should now be a link that toggles the menu on/off. The menu could be as simple/complex as you like - this is just what I hacked together by parsing the TOC on index.html I've tested it on Firefox, Opera, IE7 and Safari. Realistically, the only real problem platforms will be IE6 and perhaps iPhones. -- Richard Huxton Archonet Ltd STYLING.tgz Description: application/compressed-tar
Re: [HACKERS] navigation menu for documents
Andrew Dunstan wrote: Yes, really. What you suggest here is just not adequate, IMNSHO. I don't want to have to scroll to the top or bottom of the page to get navigation, and I want to be able to see the navigation and go where I want directly. Are you talking about the online manuals, or something else here? -- Richard Huxton Archonet Ltd -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] navigation menu for documents
Andrew Dunstan wrote: Richard Huxton wrote: Andrew Dunstan wrote: Yes, really. What you suggest here is just not adequate, IMNSHO. I don't want to have to scroll to the top or bottom of the page to get navigation, and I want to be able to see the navigation and go where I want directly. Are you talking about the online manuals, or something else here? I don't care if we don't provide this for the online manuals on postgresql.org - I'm quite happy to install it on my own server if necessary. But I am talking about the HTML docs that come from our /doc directory. And I bet if we had the option of better navigation, our online users would want us to provide it. Shouldn't be too hard to come up with something reasonable with a little css. Something only activated if javascript is turned on or some such. Give me 48 hours and I'll have a play. -- Richard Huxton Archonet Ltd -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] 8.5 development schedule
Kevin Grittner wrote: Tom Lane t...@sss.pgh.pa.us wrote: I think we used to do it more or less like that, but people didn't like it because they couldn't do any long-range planning. Well, obviously the 8.4 release cycle did little to help them. As has already been observed, there is a crying need to say no at some point to get a release out. It might actually help to do that on big patches if we don't let too many tiny ones accumulate. I seem to remember the argument being tossed about that we might as well keep working on this one because there's all these others to wrap up. Have you chaps considered a simple points system? Every patch would need five minutes attention to triage it into one of: small (1 point), medium (2), large (10), huge (50 points - Sync Repl etc). First CF gets (say) 200 points, next 150, next 100, next 75. First-come, first-served - if your patch goes over the limit it goes in the next commit-fest. -- Richard Huxton Archonet Ltd -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
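The scheme is easy to prototype. Here is a toy allocator using the point values and budgets from the post (the names, numbers and code are purely illustrative, not an actual tool):

```python
# Toy sketch of the points-based commitfest triage described above.
# Sizes, point values and budgets are just the figures from the post.
SIZE_POINTS = {"small": 1, "medium": 2, "large": 10, "huge": 50}
BUDGETS = [200, 150, 100, 75]  # points available to each commitfest in turn

def triage(patches):
    """Assign (name, size) patches, in arrival order, to the first
    commitfest with enough remaining budget; anything that fits in no
    fest spills out of the cycle entirely."""
    fests = [[] for _ in BUDGETS]
    remaining = list(BUDGETS)
    spill = []
    for name, size in patches:
        cost = SIZE_POINTS[size]
        for i, left in enumerate(remaining):
            if cost <= left:
                fests[i].append(name)
                remaining[i] -= cost
                break
        else:
            spill.append(name)
    return fests, spill
```

First-come, first-served: a patch that would push the current fest over its limit simply lands in the next one.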
Re: [HACKERS] Extensions User Design
Peter Eisentraut wrote: Another thing we might want to consider once we have a robust extension mechanism is to move some things out of the backend into extensions. Candidates could be uuid, legacy geometry types, inet/cidr, for example. These extensions would still be available and probably installed by default, but they need not be hardcoded into the backend. Presumably would help the prospective upgrader too. Upgrade tool can't cope with the change to inet types? No problem, I *know* they're not in use, since they're not loaded. -- Richard Huxton Archonet Ltd -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] Extensions User Design
David E. Wheeler wrote: On Jun 23, 2009, at 3:02 PM, Dimitri Fontaine wrote: It's just PostgreSQL reading an SQL file (foo.install.sql) and parsing each statement etc, so we obviously have the machinery to recognize SQL objects names and schema qualification. Replacing the schema on-the-fly should be a SMOP? (*cough*) Well, no. I might have written a function in PL/Perl. Is PostgreSQL going to parse my Perl function for unqualified function calls? Really? Hell, I don't think that PL/pgSQL is parsed until functions are loaded, either, though I may be wrong about that. Better is to have some magic so that functions in an extension magically have their schema put onto the front of search_path when they're called. Or when they're compiled. Or something. With the given example of extension foo depending on bar and baz, I'd suggest: - Default search_path = ext:self, pg_catalog - ext:self = wherever foo installs - ext:bar = wherever bar installs - ext:baz = wherever baz installs You *can't* have anything other than the current package in the search-path in case bar/baz have conflicting objects. I've no idea if ext:name makes sense from a parser point of view, but the idea is to map extension name to a schema. If possible, this should work anywhere in PG that a schema can be specified. So - If extension foo is installed in schema1 then ext:foo.fn1() is the same as schema1.fn1() -- Richard Huxton Archonet Ltd -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
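The ext:name idea amounts to a rename pass from extension names to whatever schema each extension actually landed in. A toy resolver, with entirely hypothetical registry contents and syntax:

```python
# Toy resolver for the hypothetical "ext:name" qualification discussed
# above: map an extension name to the schema it was installed in.
# The registry contents and the syntax itself are illustrative only.
REGISTRY = {"foo": "schema1", "bar": "schema2", "baz": "schema3"}

def resolve(name, registry=REGISTRY):
    """Rewrite 'ext:foo.fn1()' to 'schema1.fn1()'; leave other names alone."""
    if not name.startswith("ext:"):
        return name
    ext, _, rest = name[4:].partition(".")
    return "%s.%s" % (registry[ext], rest)
```

So with foo installed in schema1, ext:foo.fn1() resolves to schema1.fn1(); bar and baz may hold conflicting objects without ambiguity because only ext:self sits on the default search_path.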
Re: [HACKERS] Multicolumn index corruption on 8.4 beta 2
Floris Bos / Maxnet wrote: I am having the problem that some queries are unable to find rows when using the index. When I force a sequential scan, by doing set enable_indexscan=false; set enable_bitmapscan=false;, the same queries work fine. Not a hacker myself, but I can tell you that the first question you'll be asked is can you produce a test case? If you can generate the problem from a test table+generated data that will let people figure out the problem for you. If not, details of the table schema will be needed, and is there any pattern to the missed rows? Also - compile settings, character set and locale details might be relevant too. -- Richard Huxton Archonet Ltd -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] Service not starting: Error 1053
Magnus Hagander wrote: Heikki Linnakangas wrote: Of course, none of this helps if the culprit is a DLL or a 3rd party program that allocates the address space immediately at CreateProcess. AFAIK all the cases where we *have* identified the culprit (which has been antivirus or firewall), this is exactly what it was doing... Would it be possible to build a tool that runs through a series of permission-checks, tries to grab some shared-memory, writes to files in the appropriate folders etc. and then shows the name of any process interfering? Half the problem is that whenever someone has Windows-related difficulties there are no standard tools we can use to diagnose. -- Richard Huxton Archonet Ltd -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] Service not starting: Error 1053
Frank Featherlight wrote: Hey guys, I had two running threads here: http://archives.postgresql.org/pgsql-general/2009-02/msg00859.php http://www.postgresqlforums.com/forums/viewtopic.php?f=41&t=1574 Both have not come to a successful conclusion. In very short (but you better read the threads): I was trying to help Frank out on the -general thread and we've ruled out antivirus etc. (complete uninstall) and my guess is that it's a permission issue. Not enough of a Windows guy to know *which* permission might be causing this though. FATAL: could not reattach to shared memory (key=1804, addr=0170): 487 -- Richard Huxton Archonet Ltd -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
[HACKERS] Re: [GENERAL] Fwd: Need help in porting Oracle PL/SQL's OUT paramater based procedures
Gurjeet Singh wrote: that is, not passing anything for the OUT or INOUT parameters. This works fine for a simple SELECT usage, but does not play well when this function is to be called from another function, (and assuming that it'd break the application code too, which uses Oracle syntax of calling functions)! I have a simple function f() which I'd like to be ported in such a way that it works when called from other plpgsql code, as well as when the application uses the Oracle like syntax. Here's a sample usage of the function f() in Oracle: If you really want Oracle-compatible functions I think there's a company that might sell you a solution :-) However, failing that you'll want an example of OUT parameters in PostgreSQL code - see below. The main thing to remember is that the OUT is really just a shortcut way of defining a record type that gets returned. It's nothing like passing by reference in [insert real programming language here].

BEGIN;

CREATE OR REPLACE FUNCTION f1(IN a integer, INOUT b integer, OUT c integer)
RETURNS RECORD AS $$
BEGIN
    c := a + b;
    b := b + 1;
    -- No values in RETURN
    RETURN;
END;
$$ LANGUAGE plpgsql;

CREATE OR REPLACE FUNCTION f2() RETURNS boolean AS $$
DECLARE
    a integer := 1;
    b integer := 2;
    c integer := -1;
    r RECORD;
BEGIN
    r := f1(a, b);
    -- Original variables unaffected
    RAISE NOTICE 'a=%, b=%, c=%', a,b,c;
    -- OUT params are here instead
    RAISE NOTICE 'r.b=%, r.c=%', r.b, r.c;
    -- This works, though notice we treat the function as a row-source
    SELECT (f1(a,b)).* INTO b,c;
    RAISE NOTICE 'a=%, b=%, c=%', a,b,c;
    RETURN true;
END;
$$ LANGUAGE plpgsql;

SELECT f2();
ROLLBACK;

-- Richard Huxton Archonet Ltd -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] 8.4 release planning
Greg Smith wrote: Where I suspect this is all is going to settle down into is that if 1) the SE GUC is on and 2) one of the tables in a join has rows filtered, then you can expect that a) it's possible that the result will leak information, which certainly need to be documented, As far as I can tell this is the case however you hide the information. If you implemented it with views you'll have the same issue. If you hide the existence of project p_id=TOPSECRET01 and people can run inserts then they can spot it. Likewise, if you have fkey references to the row then deletions can be used to spot it. -- Richard Huxton Archonet Ltd -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] Improving compressibility of WAL files
Tom Lane wrote: Kevin Grittner kevin.gritt...@wicourts.gov writes: Greg Smith gsm...@gregsmith.com wrote: I thought at one point that the direction this was going toward was to provide the size of the WAL file as a parameter you can use in the archive_command: Hard to beat for performance. I thought there was some technical snag. Yeah: the archiver process doesn't have that information available. Am I being really dim here - why isn't the first record in the WAL file a fixed-length record containing e.g. txid_start, time_start, txid_end, time_end, length? Write it once when you start using the file and once when it's finished. -- Richard Huxton Archonet Ltd -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
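The proposed record is tiny; a sketch of what such a fixed-length header could look like, using Python's struct module (the field layout is invented for illustration and is not PostgreSQL's actual WAL format):

```python
import struct

# Hypothetical fixed-length bookkeeping record, as described above:
# txid_start, time_start, txid_end, time_end, length, packed as five
# unsigned 64-bit little-endian fields. Layout is illustrative only.
HEADER = struct.Struct("<5Q")

def pack_header(txid_start, time_start, txid_end, time_end, length):
    return HEADER.pack(txid_start, time_start, txid_end, time_end, length)

def unpack_header(buf):
    return HEADER.unpack(buf[:HEADER.size])
```

The record would be written once (with the end fields zeroed) when the segment starts being used and once more when it is finished, so the archiver could read the usable length without any help from the backend.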
Re: [HACKERS] Improving compressibility of WAL files
Aidan Van Dyk wrote: * Richard Huxton d...@archonet.com [090109 12:22]: Yeah: the archiver process doesn't have that information available. Am I being really dim here - why isn't the first record in the WAL file a fixed-length record containing e.g. txid_start, time_start, txid_end, time_end, length? Write it once when you start using the file and once when it's finished. It would break the WAL write-block/sync-block forward only progress of the xlog, which avoids the whole torn-page problem that the heap has. I thought that only applied when the filesystem page-size was less than the data we were writing? -- Richard Huxton Archonet Ltd -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] Enhancement to pg_dump
Rob Kirkbride wrote: I've introduced a --delete-not-drop option which simply does a DELETE FROM % rather than 'DROP and then CREATE'. Beware of foreign-keys slowing you down - a TRUNCATE of all relevant tables should be the fastest method if possible. I hope this sounds sensible and I haven't missed something - I'm still learning! Have you considered restoring to a completely different database (report1/report2) and just switching between them? -- Richard Huxton Archonet Ltd -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] Simple postgresql.conf wizard
Josh Berkus wrote: Greg, BTW, I think this is still in enough flux that we really ought to make it a pgfoundry project. I don't think we'll have anything ready for 8.4 contrib. [Been trying to find the right point to post this reply.] Is it only me that thinks this should be a service on the website too (or even first)? Fill in web form, click button, get sample postgresql.conf (with comments) back. Add a tick-box asking if we can keep a copy of their answers and you might get some useful usage info too. -- Richard Huxton Archonet Ltd -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
[HACKERS] Re: [GENERAL] Very slow queries w/ NOT IN preparation (seems like a bug, test case)
Tom Lane wrote: Sergey Konoplev [EMAIL PROTECTED] writes: You are right. I've found the odd thing (that completely drives me mad) in postgresql.conf. You are able to reproduce slow-not-in queries by switching constraint_exclusion to on in your postgresql.conf and running my test (which is attached to the first message). Hmph. It's trying to see if the NOT IN condition is self-contradictory, which of course it isn't, but the predicate_refuted_by machinery isn't smart enough to determine that except by running through all N^2 combinations of the individual x <> const conditions :-(. So it's not checking the table, it's looking to see whether clause1 OR clause2 end up excluding each other? Presumably because OR is just another operator? We could respond to this in a number of ways: 1. Tough, don't do that. 2. Put some arbitrary limit on the number of subconditions in an AND or OR clause before we give up and don't attempt to prove anything about it. Do we know the estimated cost of just executing the planner-node at this point? You could scale with the cost of actually doing the tests. 3. Put in a narrow hack that will get us out of this specific case, but might still allow very slow proof attempts in other large cases. The specific narrow hack I'm considering for #3 goes like this: The specific hack goes right over my head :-) -- Richard Huxton Archonet Ltd -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
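The quadratic blow-up Tom describes is easy to see in miniature: proving a NOT IN self-contradictory pits every "x <> const" clause against every other. A counting sketch of the shape of that work (not the real predicate_refuted_by code):

```python
# Counting sketch of the N^2 behaviour described above: one attempted
# proof per ordered pair of "x <> const" clauses. This is only meant to
# show the scale of the work, not PostgreSQL's actual logic.
def refutation_comparisons(n_consts):
    comparisons = 0
    for i in range(n_consts):
        for j in range(n_consts):
            if i != j:
                comparisons += 1  # one pairwise refutation attempt
    return comparisons
```

A thousand-entry NOT IN list already means close to a million proof attempts, which is why options 2 and 3 both amount to bailing out early.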
Re: [HACKERS] Cross-column statistics revisited
Gregory Stark wrote: They're certainly very much not independent variables. There are lots of ways of measuring how much dependence there is between them. I don't know enough about the math to know if your maps are equivalent to any of them. I think dependency captures the way I think about it rather than correlation (although I can see there must be a function that could map that dependency onto how we think of correlations). In any case as I described it's not enough information to know that the two data sets are heavily dependent. You need to know for which pairs (or ntuples) that dependency results in a higher density and for which it results in lower density and how much higher or lower. That seems like a lot of information to encode (and a lot to find in the sample). Like Josh Berkus mentioned a few points back, it's the handful of plan-changing values you're looking for. So, it seems like we've got: 1. Implied dependencies: zip-code=city 2. Implied+constraint: start-date precedes end-date and the difference between the two is usually less than a week 3. Top-heavy foreign-key stats. #1 and #2 obviously need new infrastructure. From a non-dev point of view it looks like #3 could use the existing stats on each side of the join. I'm not sure whether you could do anything meaningful for joins that don't explicitly specify one side of the join though. Perhaps just knowing that there's a dependence between two data sets might be somewhat useful if the planner kept a confidence value for all its estimates. It would know to have a lower confidence value for estimates coming from highly dependent clauses? It wouldn't be very easy for the planner to distinguish safe plans for low confidence estimates and risky plans which might blow up if the estimates are wrong though.
And of course that's a lot less interesting than just getting better estimates :) If we could abort a plan and restart then we could just try the quick-but-risky plan and if we reach 50 rows rather than the expected 10 try a different approach. That way we'd not need to gather stats, just react to the situation in individual queries. -- Richard Huxton Archonet Ltd -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] Transaction Snapshots and Hot Standby
Gregory Stark wrote: In that case the problem is dealing with different usage patterns on different tables. There might be a way to solve just that use case such as deferring WAL records for those tables. That doesn't guarantee inter-table data consistency if there were other queries which read from those tables and updated other tables based on that data though. Perhaps there's a solution for that too though. There was a suggestion (Simon - from you?) of a transaction voluntarily restricting itself to a set of tables. That would obviously reduce the impact of all the options where the accessed tables weren't being updated (where update = vacuum + HOT if I've got this straight). -- Richard Huxton Archonet Ltd -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] Transaction Snapshots and Hot Standby
Heikki Linnakangas wrote: Simon Riggs wrote: Taking snapshots from primary has a few disadvantages ... * snapshots on primary prevent row removal (but this was also an advantage of this technique!) That makes it an awful solution for high availability. A backend hung in transaction-in-progress state in the slave will prevent row removal on the master. Isolating the master from queries performed in the slave is exactly the reason why people use hot standby. And running long reporting queries in the standby is again a very typical use case. I have to say I agree with Heikki here. Blocking the master based on what the slave is doing seems to make hot standby less useful than warm. I like the idea of acquiring snapshots locally in the slave much more. It's the option that I can see people (well, me) understanding the easiest. All the others sound like ways to get things wrong. As for inconsistent query-results - that way madness lies. How on earth will anyone be able to diagnose or report bugs when they occur? As you mentioned, the options there are to defer applying WAL, or cancel queries. I think both options need the same ability to detect when you're about to remove a tuple that's still visible to some snapshot, just the action is different. We should probably provide a GUC to control which you want. I think there's only one value here: hot standby wal delay time before cancelling query. Might be a shorter name. -- Richard Huxton Archonet Ltd -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] Visibility Groups
Simon Riggs wrote: On Thu, 2008-08-07 at 12:55 +0200, Jochem van Dieten wrote: On Thu, Aug 7, 2008 at 12:38 PM, Simon Riggs wrote: I propose creating Visibility Groups that *explicitly* limit the ability of a transaction to access data outside its visibility group(s). Doesn't every transaction need to access data from the catalogs? Wouldn't the inclusion of a catalogs visibility group in every transaction negate any potential benefits? True, but I don't see the catalogs as frequently updated data. The objective is to isolate frequently updated tables from long running statements that don't need to access them. Tables can be in multiple visibility groups, perhaps that wasn't clear. When we seek to vacuum a table, we take the lowest xmin of any group it was in when we took snapshot. I'm not sure if visibility group is the best name for this - I had to go away and think through what you meant about that last bit. Have I got this right? So - a visibility group is attached to a transaction. My long-running transaction T0 can restrict itself to catalogues and table event_log. Various other transactions T1..Tn make no promises about what they are going to access. They all share the null visibility group. A table user_emails is in the null visibility group and can be vacuumed based on whatever the lowest xid of T1..Tn is. Table event_log is in both groups and can only be vacuumed based on T0..Tn (presumably T0 is the oldest, since that's the point of the exercise). An attempt to write to user_emails by T0 will fail with an error. An attempt to read from user_emails by T0 will be allowed? What happens if I'm in ISOLATION LEVEL SERIALIZABLE? Presumably the read is disallowed then too? -- Richard Huxton Archonet Ltd -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] Visibility Groups
Simon Riggs wrote: On Thu, 2008-08-07 at 14:18 +0100, Richard Huxton wrote: An attempt to write to user_emails by T0 will fail with an error. All above correct The point of doing this is that *if* T0 becomes the oldest transaction it will *not* interfere with removal of rows on user_emails. An attempt to read from user_emails by T0 will be allowed? No, reads must also be excluded otherwise MVCC will be violated. Ah good - I was wondering, but I read your first email as allowing reads. What happens if I'm in ISOLATION LEVEL SERIALIZABLE? Presumably the read is disallowed then too? No, that's not relevant. That is your choice about how often you update your snapshot of the database. The visibility group refers to the *scope* of the snapshot, so the two things are orthogonal. So - effectively we're partitioning the database into (possibly overlapping) subsets of tables. Would it simplify things at all to have a centrally-defined list of visibility scopes (or groups) which your transaction/user can access? As a DBA, I'd rather have somewhere central to manage this, and I'd probably make it per-user anyway. -- Richard Huxton Archonet Ltd -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
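The vacuum rule being discussed - a table's row-removal horizon is the lowest xmin of any transaction whose visibility group covers that table - can be sketched like this (all names and structures are hypothetical):

```python
# Sketch of the per-table vacuum horizon under visibility groups:
# a table may be vacuumed up to the lowest xmin among transactions
# whose declared group set includes that table. Names are invented.
def vacuum_horizon(table, groups, txns):
    """groups: group name -> set of tables it covers;
    txns: txn name -> (xmin, set of group names it declared)."""
    xmins = [
        xmin
        for xmin, names in txns.values()
        if any(table in groups[g] for g in names)
    ]
    return min(xmins) if xmins else None
```

So a long-running T0 restricted to event_log no longer holds back vacuum on user_emails, which is the point of the exercise.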
Re: [HACKERS] Avoiding Application Re-test
Magnus Hagander wrote: Simon Riggs wrote: Not foolproof, but still worth it. This would allow many users to upgrade to 8.4 for new features, yet without changing apps. Won't there normally be a number of changes that *cannot* be covered by such a parameter, without a whole lot more work in the patch? Slightly OT, but just so it's on the record. I'm of the opinion that whatever packaging system eventually makes its way into PG it needs to apply to stuff that's in core at the moment. That way I can just unload network_addr_types and I no longer have to worry whether applications might be affected by an incompatible change. -- Richard Huxton Archonet Ltd -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] Reliability of CURRVAL in a RULE
Nick wrote: Is the use of CURRVAL in this example reliable in heavy use? Nick - the hackers list is for people interested in working on the code-base of PostgreSQL itself. This would have been better on the general or sql lists. CREATE RULE add_email AS ON INSERT TO users WHERE (NEW.email IS NULL) DO INSERT INTO users_with_email (id) VALUES (CURRVAL('users_id_seq')); Short answer no. Rules are like macros and you can end up with unexpected multiple evaluations and strange order of execution. See the mailing list archives for details and try inserting multiple users in one go to see an example of a problem. -- Richard Huxton Archonet Ltd -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] Security and Data Protection Issues
Stuart Gundry wrote: Thank you, I'm also curious as to whether the data folder is already in some way encrypted and if so, what encryption/obfuscation is being used. There doesn't seem to be anything about this on the web. No encryption, although large text fields may be compressed (read up on TOAST) so not readable as plain-text. -- Richard Huxton Archonet Ltd -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: Does anything dump per-database config settings? (was Re: [HACKERS] ALTER DATABASE vs pg_dump)
Tom Lane wrote: Richard Huxton [EMAIL PROTECTED] writes: Tom Lane wrote: So put forward a worked-out proposal for some other behavior. IMHO the time a dump/restore should be issuing ALTER...SET on a database is when it has issued the corresponding CREATE DATABASE. So pg_dump would produce this info when, and only when, you'd used --create? I agree that it seems sensible in that case, I'm just wondering if that will cover all the use-cases. Well, in the -Fc case you'd produce it always and pg_restore would only emit it when you --create. The only time we need to restore per-database settings is if the database has been dropped. If you're not having the dump/restore re-create the database then presumably you've taken charge of the per-database settings. This would mean duplicating some functionality between pg_dump and pg_dumpall ... or maybe we could move all that logic over to pg_dump and have pg_dumpall use --create when invoking pg_dump. -- Richard Huxton Archonet Ltd -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: Does anything dump per-database config settings? (was Re: [HACKERS] ALTER DATABASE vs pg_dump)
Robert Treat wrote: On Tuesday 01 July 2008 03:45:44 Richard Huxton wrote: The only time we need to restore per-database settings is if the database has been dropped. If you're not having the dump/restore re-create the database then presumably you've taken charge of the per-database settings. I'm not sure I agree with that entirely. For example, one common usage scenario when upgrading between major versions is to create the database, load contrib modules (whose C functions or similar may have changed), and then load the dump into the database. In those case you still might want the database settings to be dumped, even though you are creating the database manually. (Now, one might argue that you could still dump with --create and ignore the error of the database creation command, but that probably isn't ideal). Well, with -Fc I'd expect it to be dumped all the time and pg_restore would selectively restore it. That should mean it has its own line in the pg_restore --list output which would let you just comment out the database-creation but leave the ALTER...SET in. -- Richard Huxton Archonet Ltd -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: Does anything dump per-database config settings? (was Re: [HACKERS] ALTER DATABASE vs pg_dump)
Tom Lane wrote: So put forward a worked-out proposal for some other behavior. OK My first thought is that the -c and -C options create a lot of the issues in this area. -c in particular is evidently meant for merging a dump into a database that already contains unrelated objects. (In fact you could argue that the *default* behavior is meant for this, -c just changes the result for conflicts.) It seems unlikely that having pg_dump issue ALTER DATABASE SET commands is a good idea in all of these scenarios. Can't comment on --clean since I don't use it. I've always assumed it's for the case where you don't have a user with permissions to drop/recreate a database (e.g. web hosting). IMHO the time a dump/restore should be issuing ALTER...SET on a database is when it has issued the corresponding CREATE DATABASE. If you want to tweak this sort of thing, just manually create the database with whatever options you want and don't use --create. I'm also wondering why it'd be bright to treat ALTER ... SET properties different from, say, database owner and encoding properties. Not sure what you mean here. -- Richard Huxton Archonet Ltd -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
[HACKERS] ALTER DATABASE vs pg_dump
Is it desirable that pg_dump doesn't dump config settings set via ALTER DATABASE? http://archives.postgresql.org/pgsql-novice/2008-04/msg00016.php I just got bitten by a DateStyle not being restored on my test DB (I usually set it client-side in the app). I could see someone without my steel trap of a mind letting something like this slip through. Obvious problem settings would be: datestyle, locale, default-text-search Is this a deliberate behaviour of pg_dump or just an unscratched itch? -- Richard Huxton Archonet Ltd -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] ALTER DATABASE vs pg_dump
Tom Lane wrote: Richard Huxton [EMAIL PROTECTED] writes: Is it desirable that pg_dump doesn't dump config settings set via ALTER DATABASE? Well, it's intentional anyway: that's handled by pg_dumpall. The basic design is that anything that can be seen from outside a specific database is handled on the pg_dumpall side. Well, global settings and per-user settings are clearly global. I'm not sure that per-database settings are logically global, although I'll accept that's how they're stored. At present it means you can't reliably do: DROP DATABASE foo; pg_restore --create foo.dump I'd then have to either hand edit the dumpall dump or wade through a bunch of errors checking that none of them were relevant. I just got bitten by a DateStyle not being restored on my test DB You could also get bitten by not having restored users or tablespaces that the dump depends on, so I'm not sure there's a strong argument here for refactoring the responsibility. Yep, but that will give you a no such role error when you try to restore. This is a situation where you can restore without errors and end up with different behaviour: dd/mm/ vs mm/dd/ or text-search stop-words changing. -- Richard Huxton Archonet Ltd -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Does anything dump per-database config settings? (was Re: [HACKERS] ALTER DATABASE vs pg_dump)
Richard Huxton wrote:
> At present it means you can't reliably do: DROP DATABASE foo; pg_restore --create foo.dump I'd then have to either hand-edit the dumpall dump or wade through a bunch of errors checking that none of them were relevant.

Actually, I'm not sure pg_dumpall does them either.

tracker=# SELECT name, setting, source FROM pg_settings WHERE name = 'DateStyle';
   name    | setting  |  source
-----------+----------+----------
 DateStyle | SQL, DMY | database
(1 row)

$ pg_dumpall -U postgres -p 5483 -g > tracker.global.schema
$ pg_dump -U postgres -p 5483 --schema-only tracker > tracker.schema
$ grep -i datestyle tracker*schema

... nothing. That's with 8.3.3. Am I doing something stupid here?
Re: Does anything dump per-database config settings? (was Re: [HACKERS] ALTER DATABASE vs pg_dump)
Richard Huxton wrote:
> Richard Huxton wrote:
>> At present it means you can't reliably do: DROP DATABASE foo; pg_restore --create foo.dump I'd then have to either hand-edit the dumpall dump or wade through a bunch of errors checking that none of them were relevant.
> Actually, I'm not sure pg_dumpall does them either. [snip] Am I doing something stupid here?

OK - so to get the ALTER DATABASE commands I need to dump the schema for the entire cluster. Is that really desired behaviour?
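In the meantime, the missing ALTER DATABASE commands can be rebuilt by hand from pg_settings - a rough sketch (hypothetical `alter_database_sql` helper; assumes the (name, setting) rows where source = 'database' have already been fetched, and the quoting is deliberately crude):

```python
def alter_database_sql(dbname, settings):
    """Build ALTER DATABASE ... SET statements for per-database settings.

    `settings` is an iterable of (name, setting) pairs, e.g. the rows of
    SELECT name, setting FROM pg_settings WHERE source = 'database'.
    """
    stmts = []
    for name, value in settings:
        # Crude literal quoting for the sketch; real code should use
        # proper identifier and literal escaping.
        stmts.append(f"ALTER DATABASE {dbname} SET {name} = '{value}';")
    return stmts

rows = [("DateStyle", "SQL, DMY")]
assert alter_database_sql("tracker", rows) == \
    ["ALTER DATABASE tracker SET DateStyle = 'SQL, DMY';"]
```

Running the output before a pg_restore --create would at least paper over the gap until the dump tools cover it.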
Re: [HACKERS] pg_dump restore time and Foreign Keys
Simon Riggs wrote:
> If we had a way of pg_dump passing on the information that the test already passes, we would be able to skip the checks. Proposal: * Introduce a new mode for ALTER TABLE ADD FOREIGN KEY [WITHOUT CHECK]; * Have pg_dump write the new syntax into its dumps, when both the source and target table are dumped in same

I've been known to manually tweak dumps before now; I can see me forgetting this. What about pg_dump writing out a row-count and an MD5 of the rows in the COPY (just a textual calculation)? Iff the restore checksum matches the dump checksum for both tables, then the foreign-key checks can be skipped. If the restore checksum doesn't match the dump, it can issue a warning but continue and run the full fkey check.
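The checksum idea only needs a few lines - a hedged illustration (hypothetical `copy_checksum` helper, not pg_dump code) of an order-sensitive row-count-plus-MD5 over the textual COPY data:

```python
import hashlib

def copy_checksum(lines):
    """Row count plus MD5 over the textual COPY lines, order-sensitive."""
    h = hashlib.md5()
    n = 0
    for line in lines:
        h.update(line.encode("utf-8"))
        h.update(b"\n")
        n += 1
    return n, h.hexdigest()

# If the digests computed at dump time and at restore time agree for both
# tables, the FK re-check could be skipped; otherwise fall back to it.
dump_rows = ["1\talpha", "2\tbeta"]
restore_rows = ["1\talpha", "2\tbeta"]
assert copy_checksum(dump_rows) == copy_checksum(restore_rows)
```

Since pg_dump and pg_restore stream the COPY data in the same order, an order-sensitive digest is enough here.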
Re: [HACKERS] May Commitfest is done!
Tom Lane wrote:
> Per discussion of various hackers at PGCon, the May commitfest is finished. The remaining patches on the queue are Heikki's map-forks patch, which was WIP not intended to be committed now; and Merlin and Andrew's libpq hooks patch, which is still in flux and not ready to commit now. So ... back to your regularly scheduled development.

Is there a tag in CVS to mark this point, or better still a tarball that people like me can check out and play with over the next month or two?
Re: [HACKERS] alter + preserving dependencies
Dimitri Fontaine wrote:
> Le 7 mai 08 à 07:52, Tom Lane a écrit :
>> Dimitri Fontaine [EMAIL PROTECTED] writes: Could we consider ALTER VIEW ALTER COLUMN ... SET DEFAULT ...;?
>> We could if we hadn't already done it five or so years ago. Or am I missing what you need here?
> My 8.3.1 installation's psql \h only gives me: Syntax: ALTER VIEW name RENAME TO newname

Ah, you use ALTER TABLE:

  ALTER TABLE my_view ALTER COLUMN view_column SET DEFAULT expr;
Re: [HACKERS] [GENERAL] I think this is a BUG?
Kaloyan Iliev wrote:
> Hi, I find something very interesting which I think is a bug and I want to discuss it. Here is example 1:
> 1. I create a table without a PK;
> 2. I insert 1 row;
> 3. I add the PK;
> 4. When I select, all IDs have NULL values, but the column is NOT NULL;
> 5. But if I try to create a regular NOT NULL column, postgres stops me (as it should) with ERROR: column "id" contains null values.
> PostgreSQL 8.2.7 on amd64-portbld-freebsd6.3, compiled by GCC cc (GCC) 3.4.6 [FreeBSD] 20060305
>
> r=# CREATE TABLE test (a text, b int);
> CREATE TABLE
> r=# INSERT INTO test VALUES ('test', 1);
> INSERT 0 1
> r=# ALTER TABLE test ADD COLUMN id INT NOT NULL PRIMARY KEY;
> NOTICE: ALTER TABLE / ADD PRIMARY KEY will create implicit index "test_pkey" for table "test"
> ALTER TABLE
> r=# SELECT * FROM test WHERE id IS NULL;
>   a   | b | id
> ------+---+----
>  test | 1 |

Well, that's clearly broken (it seems to do the same in 8.3 too). I've cc'ed the hackers list so they can investigate further. Presumably the NOT NULL test is being missed somehow when the column is initially created.

> r=# ALTER TABLE test ADD COLUMN not_null int NOT NULL;
> ERROR: column "not_null" contains null values
>
> My question is why didn't PG create the sequence and fill the values in the first example.

Not sure what you mean here.

> And why does it create a NOT NULL column with null values in it!

Because it hasn't got any other value to put in it. Try:

  ALTER TABLE test ADD COLUMN id3 integer NOT NULL DEFAULT 0;
Re: [HACKERS] Problem with site doc search
Bruce Momjian wrote:
> Magnus Hagander wrote:
>> I didn't do anything, but possibly it got fixed by a different upgrade at some point, and the recrawling of the sites.
> Oops, maybe it isn't fixed. I tried "pg_standby" and it seemed to work, but "pg" and "standby" return the same results. Is that correct? How do I test this?

The default is to split words on underscore, so it's probably doing what it always did. Try "to_tsquery" and you should see it matching "tsquery" (probably "to" is a stopword). I did put together a custom parser that allowed underscores in words, but given my extensive C experience in the last decade (one tsearch parser), you don't want to just plug that into the live site. Someone (Gevik?) was going to have a look at it when they had the time, but I'd guess time is the one thing none of us have much of.
Re: [HACKERS] Problem with site doc search
Magnus Hagander wrote:
> Did you ever post the code to anybody other than Gevik? If not, please send it to pgsql-www and someone can give it a quick look-over (perhaps Oleg can help us there?)

Will do.
Re: [HACKERS] Problem with site doc search
Oleg Bartunov wrote:
> Sergey Karpov prepared contrib/extend_parser, which we intend to use for indexing pg-related documents. It handles '_' properly, so if anybody is interested, we could post it. Also, it can be useful for playing with, since it's a standalone contrib module.

Does it make sense to back-patch the default parser for 8.4? At present, it can't handle underscores in file-paths.
Re: [HACKERS] strange pg_ctl's behavior
Tatsuo Ishii wrote:
> Hi, I have encountered a strange pg_ctl behavior in 8.3.
>   pg_ctl -w -o "-p 5432" start       -- works
>   pg_ctl -w -o "-i" start            -- works
>   pg_ctl -w -o "-p 5432 -i" start    -- doesn't work
> In the last case, even though postmaster starts successfully, pg_ctl keeps trying to make sure that postmaster actually started and continues to print

It's not getting confused and thinking the port is "5432 -i", is it? I tried "-i -p 5432" and that seemed to work.
Re: [HACKERS] strange pg_ctl's behavior
Richard Huxton wrote:
> Tatsuo Ishii wrote:
>> Hi, I have encountered a strange pg_ctl behavior in 8.3.
>>   pg_ctl -w -o "-p 5432" start       -- works
>>   pg_ctl -w -o "-i" start            -- works
>>   pg_ctl -w -o "-p 5432 -i" start    -- doesn't work
>> In the last case, even though postmaster starts successfully, pg_ctl keeps trying to make sure that postmaster actually started and continues to print
> It's not getting confused and thinking the port is "5432 -i", is it? I tried "-i -p 5432" and that seemed to work.

Hmm - that does seem to be the case. I added a line to print connstr along with the "waiting for server to start" dots:

  dbname=postgres port=5483 -i connect_timeout=5

The code looks fine to my uneducated eye, though (bin/pg_ctl/pg_ctl.c, test_postmaster_connection, starting at line 412; I think the issue must be at lines 425..443).
Re: [HACKERS] strange pg_ctl's behavior
Richard Huxton wrote:
> Hmm - that does seem to be the case. I added a line to print connstr along with the "waiting for server to start" dots:
>   dbname=postgres port=5483 -i connect_timeout=5
> The code looks fine to my uneducated eye, though (bin/pg_ctl/pg_ctl.c, test_postmaster_connection, starting at line 412; I think the issue must be at lines 425..443).

Line 52:

  #define WHITESPACE "\f\n\r\t\v" /* as defined by isspace() */

We've defined whitespace as not including a space character :-/
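To see why that matters, here's a small stand-in demonstration (Python rather than the real C code; `split_options` is a hypothetical helper): splitting the -o string on a whitespace set that omits the space character leaves "-p 5432 -i" as a single token, so everything after -p is taken as the port value.

```python
import re

# The delimiter sets: the buggy one mirrors the WHITESPACE define above
# (space missing); the fixed one includes it.
WHITESPACE_BUGGY = "\f\n\r\t\v"
WHITESPACE_FIXED = " \f\n\r\t\v"

def split_options(optstring, whitespace):
    """Split an option string on the given delimiter characters."""
    parts = re.split("[" + re.escape(whitespace) + "]+", optstring)
    return [tok for tok in parts if tok]

# With the buggy set, "-p 5432 -i" never splits, so pg_ctl would read
# the port as "5432 -i" and fail to connect to the new postmaster.
assert split_options("-p 5432 -i", WHITESPACE_BUGGY) == ["-p 5432 -i"]
assert split_options("-p 5432 -i", WHITESPACE_FIXED) == ["-p", "5432", "-i"]
```

It also explains why reordering to "-i -p 5432" happened to work: the port digits end the string, so nothing trails them.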
[HACKERS] Behaviour of to_tsquery(stopwords only)
I'm not sure what value a tsquery has if it's composed from stopwords only, but it doesn't seem to be null or equal to itself. That strikes me as ... unintuitive, although I'm happy to be re-educated on this. I think it's because CompareTSQ (tsquery_op.c, line 142) doesn't have a case to handle query sizes of zero. That's what seems to be returned from tsearch/to_tsany.c, lines ~345-350.

SELECT qid, words, query,
       (query IS NULL) AS isnull,
       (query = to_tsquery(words)) AS issame
FROM util.queries ORDER BY qid DESC LIMIT 5;

NOTICE: text-search query contains only stop words or doesn't contain lexemes, ignored
NOTICE: text-search query contains only stop words or doesn't contain lexemes, ignored

 qid  |  words   |   query    | isnull | issame
------+----------+------------+--------+--------
 1000 | to       |            | f      | f
  999 | or       |            | f      | f
  998 | requests | 'request'  | f      | t
  997 | site     | 'site'     | f      | t
  996 | document | 'document' | f      | t
(5 rows)
Re: [HACKERS] Behaviour of to_tsquery(stopwords only)
Further tsquery comparison fun:

= SELECT q.qid, q.query, count(*)
  FROM doc.documents d, util.queries q
  WHERE d.words @@ q.query AND (q.query::text = $$'tender'$$)
  GROUP BY q.qid, q.query;

 qid |  query   | count
-----+----------+-------
 195 | 'tender' |   374
 248 | 'tender' |   374
 257 | 'tender' |   374
 332 | 'tender' |   374
 401 | 'tender' |   374
 409 | 'tender' |   374
 519 | 'tender' |   374
 557 | 'tender' |   374
 736 | 'tender' |   374
 749 | 'tender' |   374
 869 | 'tender' |   374
 879 | 'tender' |   374
 926 | 'tender' |   374
(13 rows)

= SELECT q.query, count(*)
  FROM doc.documents d, util.queries q
  WHERE d.words @@ q.query AND (q.query::text = $$'tender'$$)
  GROUP BY q.query;

  query   | count
----------+-------
 'tender' |  1870
 'tender' |  1496
 'tender' |  1496
(3 rows)

It seems that the tsquery is remembering the shape of the original query, even though it's been trimmed.

= SELECT q.query, min(qid), max(qid), count(*)
  FROM doc.documents d, util.queries q
  WHERE d.words @@ q.query AND (q.query::text = $$'tender'$$)
  GROUP BY q.query;

  query   | min | max | count
----------+-----+-----+-------
 'tender' | 736 | 926 |  1870    (5 rows aggregated)
 'tender' | 401 | 557 |  1496    (4 rows aggregated)
 'tender' | 195 | 332 |  1496    (4 rows aggregated)
(3 rows)

= SELECT * FROM util.queries
  WHERE qid IN (195, 248, 257, 332, 401, 409, 519, 557, 736, 749, 869, 879, 926)
  ORDER BY qid;

 qid |      words      |  query
-----+-----------------+----------
 195 | can of tenders  | 'tender'    (3 clauses)
 248 | tender the this | 'tender'    (3 clauses)
 257 | have tender for | 'tender'    (3 clauses)
 332 | for tenders of  | 'tender'    (3 clauses)
 401 | tender with     | 'tender'    (2 clauses)
 409 | tenders to      | 'tender'    (2 clauses)
 519 | tender to       | 'tender'    (2 clauses)
 557 | tenders be      | 'tender'    (2 clauses)
 736 | tenderer        | 'tender'    (1 clause)
 749 | tender          | 'tender'    (1 clause)
 869 | tender          | 'tender'    (1 clause)
 879 | tender          | 'tender'    (1 clause)
 926 | tender          | 'tender'    (1 clause)
(13 rows)

So - is this a bug, a feature, or a misfeature?
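A toy model of what the numbers suggest (Python; `TrimmedQuery` is purely illustrative, not the real tsquery representation): if a trimmed query keeps its original operand count, and equality still looks at that count, then textually identical queries fall into separate GROUP BY buckets.

```python
from collections import Counter
from dataclasses import dataclass

@dataclass(frozen=True)
class TrimmedQuery:
    text: str   # what the query prints as after stopword trimming
    size: int   # leftover shape: operand count of the *original* query

# Thirteen queries that all trim down to 'tender' but started with
# 3, 2 or 1 clauses, matching the util.queries listing above.
queries = [TrimmedQuery("'tender'", n)
           for n in (3, 3, 3, 3, 2, 2, 2, 2, 1, 1, 1, 1, 1)]

groups = Counter(queries)            # equality/hashing include .size
assert len(groups) == 3              # three groups, like GROUP BY q.query
assert all(q.text == "'tender'" for q in groups)
```

Grouping by the printed text alone would give one bucket; grouping by the full value gives three, which is exactly the 1870/1496/1496 split.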
[HACKERS] Intended behaviour of SET search_path with SQL functions?
= SHOW search_path;
 search_path
-------------
 beta
(1 row)

= CREATE OR REPLACE FUNCTION func_b() RETURNS SETOF int AS $$
    SELECT id FROM table_a;
  $$ LANGUAGE sql SET search_path = alpha;
ERROR: relation "table_a" does not exist
CONTEXT: SQL function "func_b"

= \d table_a
Did not find any relation named "table_a".

= \d alpha.table_a
   Table "alpha.table_a"
 Column |  Type   | Modifiers
--------+---------+-----------
 id     | integer |

If I temporarily create a beta.table_a then I get to create the function, and afterwards it does the right thing. It also works fine with a pl/pgsql function - presumably it's all down to context at the initial parse. I can't think of a way to exploit this maliciously, or do anything other than cause a little confusion, but I'm not sure it's intentional.
Re: [HACKERS] Behaviour of to_tsquery(stopwords only)
Teodor Sigaev wrote:
>> So - is this a bug, feature, feature?
> It's definitely a bug:
>
>   select count(*), query from queries group by query;
>    count |  query
>   -------+----------
>        3 | 'tender'
>        4 | 'tender'
>        4 | 'tender'
>   (3 rows)
>
> Will fix it soon.

Ah, smashing.
[HACKERS] Full text search - altering the default parser
The default parser doesn't allow commas in numbers (I can see why, I think).

SELECT ts_parse('default', '123,000');
 ts_parse
----------
 (22,123)
 (12,",")
 (22,000)

One option of course is to pre-process the text, but since we can support custom parsers I thought I'd take a look at the code to teach it some flexibility on numbers. I'm guessing this would be of interest to anyone wanting to support European-style "," decimal indicators too. My C is horribly rusty, so can I check I've got this right? Before I start exploring compiler errors I've not seen for decades ;-)

The parser functions (prsd_xxx) are all defined in backend/tsearch/wparser_def.c. The state machine is driven through the TParserStateActionItem definitions on lines 644-1263; changing one of these will change the definition of the corresponding token-type. To add a new token-type, I'd add it to the various lists at lines 30-194, then add the relevant TParserStateActionItems.
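For what it's worth, the behaviour change can be prototyped outside C first - a rough Python sketch (hypothetical token names, nothing like the real TParser tables) of a scanner that lets a comma continue a number when digits follow:

```python
def tokenize_numbers(text):
    """Tiny scanner: digits form a NUMBER token; a comma followed by a
    digit continues the current number; anything else is a BLANK token."""
    tokens, i, n = [], 0, len(text)
    while i < n:
        c = text[i]
        if c.isdigit():
            j = i
            while j < n and (text[j].isdigit() or
                             (text[j] == "," and j + 1 < n and text[j + 1].isdigit())):
                j += 1
            tokens.append(("NUMBER", text[i:j]))
            i = j
        else:
            tokens.append(("BLANK", c))
            i += 1
    return tokens

# The default parser splits '123,000' into three tokens; this variant
# keeps the comma-grouped digits together as one number.
assert tokenize_numbers("123,000") == [("NUMBER", "123,000")]
```

The extra lookahead (comma must be followed by a digit) is what the new TParserStateActionItem entries would have to express.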
Re: [HACKERS] select avanced
[EMAIL PROTECTED] wrote:
> I have the following table:

The hackers list is for development of the PostgreSQL database itself. Please try reposting on the general or sql mailing lists.
Re: [HACKERS] Full-text search default vs specified configuration
Tom Lane wrote:
> Richard Huxton [EMAIL PROTECTED] writes:
>> Would there be any support for two changes in 8.4 though? 1. Tag tsvectors/tsqueries with the (oid of) their configuration? 2. Either warn or require CASCADE on changes to a configuration/dictionary that could impact existing indexes etc.
> IIRC, the current behavior is intentional --- Oleg and Teodor argued that tsvector values are relatively independent of small changes in configuration and we should *not* force people to, say, reindex their tables every time they add or subtract a stopword. If we had some measure of whether a TS configuration change was critical or not, it might make sense to restrict critical changes; but I fear that would be kind of hard to determine.

Well, clearly in my example it didn't impact operation at all, but it's an accident waiting to happen (and, more importantly, a hard one to track down). It's like running with SQL_ASCII encoding: everything just ticks along, only to cause problems a month later.

What about the warning: "This may affect existing indexes - please check"? Would that cause anyone problems? What worries me is that it might take 10 messages on the general/sql lists to figure out the problem. This was reported as "words with many hits cause problems". Maybe it's just a matter of getting the message out: always specify the config, or never specify the config.
Re: [HACKERS] distibuted transactions, SQL+XPath+XTree
Тюрин Дмитрий wrote:
> Hi list,

Hi Dmitry, nice to have you back again.

> I see the following business opportunity for Postgres: I) A simple man can't program middleware to connect an XML client and Postgres.

Aha! Still trying to push an XML command system and HTTP server into the backend.

> II) Requests into several databases do not exist.

Well, there are middleware layers that'll do so, or various case-specific solutions requiring dblink. The key problem is what you do with cross-database dependencies. How were you thinking of dealing with this?

> III) Notebooks need several switchings-on and switchings-off during a transaction.

How are you dealing with the locking issues?

> IV) Distance between strings is not supported, which makes approximate searching impossible.

Would that be "not supported" in the sense of contrib/fuzzystrmatch?

> V) There is no possibility to hide some (not all) records of a table, granted to other users, from these users.

Apart from views or the "veil" pgFoundry project, of course.

> I ask you to implement these solutions, so that Postgres gets an advantage over other DBMSes. I have prepared several drawings http://sql50.euro.ru/sql5.11.3.ppt to explain the ideas. More details are described below.

Were you looking to hire developers, or do you have customers who are looking to hire developers?
Re: [HACKERS] Permanent settings
Magnus Hagander wrote:
> On Tue, Feb 19, 2008 at 03:53:11PM +, Richard Huxton wrote:
>> Magnus Hagander wrote:
>>> What I'd really like to see is something like a new keyword on the SET command, so you could do SET PERMANENT foo = bar, which would write the configuration back into postgresql.conf. I don't have a complete solution for how to actually implement it, so I'm just throwing out some ideas for comment.
>> Not sure if it's of interest, but you might want to look at the postfix mailserver configuration setup and see if that translates to an API.
>>
>>   postconf                 lists the configuration settings (in alphabetical order)
>>                            SELECT * FROM pg_settings
>>   postconf -n              lists non-default settings
>>                            SELECT * FROM pg_settings WHERE NOT source = 'default'
>>   postconf <setting>       displays "setting = value"
>>                            SHOW log_destination
>>   postconf -e <setting>=<value>
>>                            edits the configuration file, changing that setting
> That's the one remaining :-)
>> The editing option replaces any existing version of that setting and adds the new value at the end of the file.
> Eh, it cannot both replace it and add it at the end of the file, can it? Does it replace it in-line, or does it remove the in-line entry and put the new one at the end? Or are you saying it edits in-line entries and appends new ones at the end?

Sorry:
- Edits existing lines.
- Adds new ones to the end of the file.
- Leaves blank lines, comments etc. alone.

>> Having all the values at the end of the file works well, because for a simple setup you don't need to change many settings and they don't depend on order.
> Right. I don't think we have any settings that depend on order, do we?

That's what I was trying to think of - nothing came to mind.
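The postconf editing behaviour described above is easy to mimic - a minimal sketch (hypothetical `set_config` helper; not postconf itself, nor any proposed PostgreSQL code): rewrite an existing `key = value` line in place, append the setting if absent, and leave comments and blank lines untouched.

```python
import re

def set_config(lines, key, value):
    """Edit a key in-place if present; otherwise append it.
    Blank lines and comments pass through unchanged."""
    out, found = [], False
    pattern = re.compile(r"^\s*" + re.escape(key) + r"\s*=")
    for line in lines:
        if not found and pattern.match(line):
            out.append(f"{key} = {value}")   # edit existing line in place
            found = True
        else:
            out.append(line)
    if not found:
        out.append(f"{key} = {value}")       # append new setting at the end
    return out

conf = ["# memory", "shared_buffers = 32MB", "", "port = 5432"]
conf = set_config(conf, "shared_buffers", "128MB")    # edited in place
conf = set_config(conf, "log_destination", "syslog")  # appended at end
assert conf == ["# memory", "shared_buffers = 128MB", "", "port = 5432",
                "log_destination = syslog"]
```

In-place editing keeps the file readable and, since nothing in postgresql.conf depends on order, appending new settings at the end is safe.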