Re: [HACKERS] Recent vendor SSL renegotiation patches break PostgreSQL

2010-02-22 Thread Chris Campbell
On Feb 22, 2010, at 12:25 PM, Tom Lane wrote:

 I think we already missed the window where it would have been sensible
 to install a hack workaround for this.  If we'd done that in November
 it might have been reasonable, but by now it's too late for any hack
 we install to spread much faster than fixed openssl libraries.

Could we simply ignore renegotiation errors? Or change them to warnings? That 
may enable us to work with the semi-fixed OpenSSL libraries that are currently 
in the field, without disabling the functionality altogether. 

- Chris


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Recent vendor SSL renegotiation patches break PostgreSQL

2010-02-04 Thread Chris Campbell
On Feb 3, 2010, at 10:16 AM, Stefan Kaltenbrunner wrote:

 Robert Haas wrote:
 On Wed, Feb 3, 2010 at 6:24 AM, Chris Campbell chris_campb...@mac.com 
 wrote:
The flurry of patches that vendors have recently been making to OpenSSL
to address the potential man-in-the-middle attack during SSL
renegotiation have disabled SSL renegotiation altogether in the OpenSSL
libraries. Applications that make use of SSL renegotiation, such as
PostgreSQL, start failing.
 Should we think about adding a GUC to disable renegotiation until this
 blows over?
 
Hmm, I wonder if we should not go as far as removing the whole renegotiation
code; from the field it seems that there are very, very few daemons actually
doing that kind of forced renegotiation.

There was a discussion about the relevance and consequences of SSL 
renegotiation on this list back in 2003:

http://archives.postgresql.org/pgsql-interfaces/2003-04/msg00075.php

Personally, my production servers have been patched to remove renegotiation 
completely, and I’m comfortable with the consequences of that for my usage.

- Chris




[HACKERS] Recent vendor SSL renegotiation patches break PostgreSQL

2010-02-03 Thread Chris Campbell
Greetings, hackers!

The flurry of patches that vendors have recently been making to OpenSSL to 
address the potential man-in-the-middle attack during SSL renegotiation have 
disabled SSL renegotiation altogether in the OpenSSL libraries. Applications 
that make use of SSL renegotiation, such as PostgreSQL, start failing.

I’ve noticed such failures on Mac OS X 10.6.2 after installing Security Update 
2010-001 (which is when Apple distributed their OpenSSL patch):

http://support.apple.com/kb/HT4004

 OpenSSL
 
 CVE-ID: CVE-2009-3555
 
 Available for: Mac OS X v10.5.8, Mac OS X Server v10.5.8, Mac OS X v10.6.2, 
 Mac OS X Server v10.6.2
 
 Impact: An attacker with a privileged network position may capture data or 
 change the operations performed in sessions protected by SSL
 
 Description: A man-in-the-middle vulnerability exists in the SSL and TLS 
 protocols. Further information is available at 
 http://www.phonefactor.com/sslgap A change to the renegotiation protocol is 
 underway within the IETF. This update disables renegotiation in OpenSSL as a 
 preventive security measure.

After installing Security Update 2010-001, any libpq connection to the server 
that exchanges more than 512MB of data (the RENEGOTIATION_LIMIT defined in 
src/backend/libpq/be-secure.c) will trigger an SSL renegotiation, which fails, 
which disconnects the client. I observed the problem on both PostgreSQL 8.1.19 
and PostgreSQL 8.4.2 (those are the only versions I have in production).

I have been working around the problem by disabling SSL renegotiation entirely 
in my PostgreSQL servers, commenting out lines 316-339 in 
src/backend/libpq/be-secure.c.

There have been reports of such SSL-related breakage on other platforms, too:

http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=560205

Thanks! Happy hacking!

- Chris




Re: [HACKERS] Recent vendor SSL renegotiation patches break PostgreSQL

2010-02-03 Thread Chris Campbell
Is there a way to detect when the SSL library has renegotiation disabled? 
(Either at compile-time or runtime, although runtime would definitely be better 
because we’ll change our behavior if/when the user updates their SSL library.)

If so, we could skip renegotiation when it’s disabled in the library, but 
otherwise perform renegotiation like we normally do (every 512 MB, I think it 
is).

Also, the official OpenSSL patch provides a way for the application to 
re-enable renegotiation. I don’t think all implementations will do so, though 
(e.g., some vendors might have patched it differently).

- Chris




[HACKERS] Deadlock with pg_dump?

2006-10-26 Thread Chris Campbell
We're getting deadlock error messages in the production database logs  
during times of inactivity, where the only other thing using the  
database (we think) is the every-15-minutes pg_dump process. There  
are still database connections up-and-running from unused Hibernate  
Java processes, but they Shouldn't Be doing anything (and shouldn't  
be holding locks, etc).


The deadlock error message looks like this:

ERROR: deadlock detected
DETAIL: Process 1120 waits for ShareLock on transaction 5847116;  
blocked by process 1171.
Process 1171 waits for ExclusiveLock on tuple (6549,28) of relation  
37637 of database 37574; blocked by process 1120.


Relation 37636 is the users table (schema attached).

Process 1120 was running an UPDATE query and changing a single row in  
the users table. The users table does have foreign keys to 4 other  
tables. Is it possible that those foreign key constraints acquire  
locks in a different order than pg_dump (as it's SELECTing from the  
tables), and it's hitting at *just* the right time to cause a deadlock?


I've tried to reproduce it on a test machine by running pgbench  
(after adding foreign keys to the pgbench tables) and pg_dump in  
tight loops in two concurrent shell scripts, but no deadlock.


Any ideas on how to track this down?

Under what conditions does a process acquire a ShareLock on another  
transaction?


Thanks!

- Chris

                            Table "public.users"
      Column      |         Type          |                        Modifiers
------------------+-----------------------+----------------------------------------------------------
 user_id          | integer               | not null default nextval('users_user_id_seq'::regclass)
 user_last_name   | character varying(64) |
 user_first_name  | character varying(64) |
 user_middle_name | character varying(64) |
 univ_id          | integer               |
 usrtyp_id        | integer               |
 user_disabled    | boolean               | default false
 customer_id      | integer               |
 sysuser_id       | integer               |
 user_dob         | date                  |
Indexes:
    "users_pkey" PRIMARY KEY, btree (user_id)
Foreign-key constraints:
    "fk_customer_id" FOREIGN KEY (customer_id) REFERENCES customer(customer_id) ON UPDATE RESTRICT ON DELETE RESTRICT
    "fk_users_2" FOREIGN KEY (univ_id) REFERENCES universities(univ_id)
    "fk_users_3" FOREIGN KEY (usrtyp_id) REFERENCES user_type(usrtyp_id) ON UPDATE RESTRICT
    "system_user_sysuser_id_fkey" FOREIGN KEY (sysuser_id) REFERENCES system_users(sysuser_id) ON UPDATE RESTRICT ON DELETE RESTRICT

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [HACKERS] Deadlock with pg_dump?

2006-10-26 Thread Chris Campbell

On Oct 26, 2006, at 17:21, Tom Lane wrote:

And what was 1171 doing?  I really doubt that either of these could have
been pg_dump.


I know that process 1120 is a Java client (Hibernate) running an  
UPDATE query, but I have no idea what 1171 is. I doubt that 1171 was  
pg_dump, but when we turn off the pg_dump cron jobs (for 12-ish  
hours), the deadlocks go away. We usually see 5 or 6 deadlocks spread  
throughout the day. That's not definitive evidence, of course, but  
it's certainly curious.



Given that you appear to be running 8.1 (tut-tut for not saying), it
really shouldn't be a foreign key problem either.  I'm betting these
are just flat out conflicting updates of the same row(s).


Yeah, 8.1.3. Sorry about the omission.

Is there additional logging information I can turn on to get more  
details? I guess I need to see exactly what locks both processes  
hold, and what queries they were running when the deadlock occurred?  
Is that easily done, without turning on logging for *all* statements?


Thanks!

- Chris




Re: [HACKERS] Deadlock with pg_dump?

2006-10-26 Thread Chris Campbell

On Oct 26, 2006, at 18:45, Tom Lane wrote:


log_min_error_statement = error would at least get you the statements
reporting the deadlocks, though not what they're conflicting against.


Would it be possible (in 8.3, say) to log the conflicting backend's  
current statement (from pg_stat_activity, perhaps)? I guess the  
conflicting backend would currently be waiting for a lock, so its  
current query (before releasing the lock) is the one we want.


Thanks!

- Chris




Re: [HACKERS] Syntax bug? Group by?

2006-10-17 Thread Chris Campbell

On Oct 17, 2006, at 15:19, Peter Eisentraut wrote:


Mark Woodward wrote:

Shouldn't this work?

select ycis_id, min(tindex), avg(tindex) from y where ycis_id = 15;

ERROR:  column y.ycis_id must appear in the GROUP BY clause or be
used in an aggregate function


This would require a great deal of special-casing, in particular
knowledge of the = operator, and then the restriction to a particular
form of the WHERE clause.  For overall consistency, I don't think this
should be allowed.


In this particular case, the client constructing the query *knows*  
the value of ycis_id (since the client is generating the ycis_id =  
15 clause). It's technically just a waste of bandwidth and server  
resources to recalculate it. If you really want to replicate the  
output of the query you proposed, you could rewrite it on the client as:


select 15 as ycis_id, min(tindex), avg(tindex) from y where  
ycis_id = 15;


You could argue that the server should do this for you, but it seems  
ugly to do in the general case. And, like Peter points out, would  
need a lot of special-casing. I guess the parser could do it for  
expressions in the SELECT clause that exactly match expressions in  
the WHERE clause.


Thanks!

- Chris




Re: [HACKERS] [PATCHES] Use non-deprecated APIs for dynloader/darwin.c

2006-10-08 Thread Chris Campbell

On Oct 8, 2006, at 14:29, Tom Lane wrote:


Looks good, but I don't think we want to abandon OSX 10.2 support
just yet.  I'll revise this to use a configure probe for dlopen.


Maybe we can abandon Mac OS X 10.2 in 8.3 and later? And not back-port
these patches to the 7.x, 8.0, and 8.1 branches?


BTW, I think the configure probe (only on Darwin, correct?) should  
test for the existence of dlfcn.h.



My inclination is to apply this one now, since it only affects OSX
and should be easily testable, but to hold off on your other patch
for portable Bonjour support until 8.3 devel starts.  The portability
implications of that one are unclear, and I don't know how to test it
either, so I think putting it in now is too much risk.


The Bonjour patch wasn't intended to be portable to other platforms  
just yet. As submitted, it has the same risks/advantages as this  
dlopen() patch -- it only works on 10.3 and later, but isn't  
deprecated in 10.4.


If we want to keep 10.2 support for Bonjour, we can test for both  
DNSServiceDiscovery.h and dns_sd.h in ./configure, and prefer  
dns_sd.h if it's found (which will be the case for 10.3 and 10.4) but  
use DNSServiceDiscovery.h if not (which will be the case for 10.2).


Thanks!

- Chris




[HACKERS] libreadline only used with psql?

2006-10-07 Thread Chris Campbell
I've grepped through the source code, and the only thing I can find  
that uses readline (or libedit) is psql.


Is that correct?

If that's the case, how hard would it be to link only psql with  
readline (or libedit)?


Currently, if you ./configure with readline support, -lreadline (or
-ledit) is added to the used-by-everything LIBS variable. Can we
create a PSQL_LIBS variable and have ./configure populate that with  
libraries that will only be needed by psql? That way, ./configure can  
put -lreadline there and keep it out of LIBS so all the other  
binaries (postmaster, pg_ctl, pg_dump, pg_restore, etc) won't require  
it.


This request would be accompanied by a patch, but I wanted to ask  
about the feasibility of a PSQL_LIBS variable before going down that  
road.


Thanks!

- Chris




[HACKERS] hot standby system

2006-07-21 Thread Chris Campbell
The documentation [1] says this about On-line backup and
point-in-time recovery:


If we continuously feed the series of WAL files to another machine  
that has been loaded with the same base backup file, we have a hot  
standby system: at any point we can bring up the second machine  
and it will have a nearly-current copy of the database.


Is this possible today in a stable and robust way? If so, can we  
document the procedure? If not, should we alter the documentation so  
it's not misleading? I've had several people ask me where to enable  
the hot standby feature, not realizing that PostgreSQL only has  
some of the raw materials that could be used to architect such a thing.
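For reference, the raw materials in 8.1 are WAL archiving on the primary plus a restore_command used when recovery is started on the standby; a hedged sketch follows (host names and paths are purely illustrative). The part the documentation glosses over is everything else: taking the base backup, keeping the standby in a restartable-recovery loop, and failing over.

```
# primary: postgresql.conf
archive_command = 'rsync %p standby:/var/lib/pgsql/walarchive/%f'

# standby: recovery.conf, created when bringing the standby up
restore_command = 'cp /var/lib/pgsql/walarchive/%f %p'
```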


Thanks!

- Chris

[1] http://www.postgresql.org/docs/8.1/interactive/backup-online.html




Re: [HACKERS] lastval exposes information that currval does not

2006-07-05 Thread Chris Campbell

On Jul 5, 2006, at 14:51, Phil Frost wrote:

test=# create function bump() returns bigint language sql security  
definer as $$ select nextval('private.seq'); $$;


SECURITY DEFINER means that the function runs with the permissions of  
the role used to create the function (ran the CREATE FUNCTION  
command). Due to your # prompt, I'm guessing that you were a  
superuser when you ran this command. Thus, bump() will be run with  
the superuser's permissions.


The superuser most definitely has permissions to access private.seq.

This has nothing to do with schema security or lastval() versus  
currval().


Check out the CREATE FUNCTION documentation:

   http://www.postgresql.org/docs/8.1/interactive/sql-createfunction.html


- Chris




[HACKERS] Preventing DELETE and UPDATE without a WHERE clause?

2006-06-15 Thread Chris Campbell
I heard an interesting feature request today: preventing the  
execution of a DELETE or UPDATE query that does not have a WHERE clause.


The user was worried about a typo leading to:

   DELETE FROM very_important_table

and deleting all the data. Or doing something similar with an UPDATE:

   UPDATE very_important_table SET important_column = 'Smith'

and all the rows now have their important_column set to Smith.

I was thinking that this could be accomplished with a GUC to cause  
the server to report an error if DELETE and UPDATE queries don't  
contain WHERE clauses. allow_mod_queries_without_qualifier or  
something (which would obviously default to true).


If this setting was activated (the GUC changed to false), the above  
queries could still be executed, but it would take a conscious effort  
by the user to add a WHERE clause:


   DELETE FROM very_important_table WHERE true;
   UPDATE very_important_table SET important_column = 'Smith' WHERE  
true;


Would such a patch ever be accepted?

Thanks!

- Chris




Re: [HACKERS] Passing arguments to views

2006-02-03 Thread Chris Campbell

On Feb 3, 2006, at 02:09, Tino Wildenhain wrote:


Well, if the view does not suit your needs, why don't you use a
set-returning function instead? Inside it you can do all the magic
you want and still use it similar to a table or view.


That's what I'm currently doing (as explained in the first message in  
the thread). But the function is a black box to the planner when  
the query is executed -- I'd like the planner to be able to combine  
the query inside the function with the outer calling query and plan  
it as one big query. Like it does with views. Thus, views with  
arguments.


We're certainly not deficient in this area (set-returning functions  
fill the need quite well), but a feature like this would go even  
further in terms of ease-of-use and performance.


Benefits of views with arguments versus functions:

* Better query execution performance because the planner can plan the  
whole query (rewriting the original query to replace references to  
the view with the view's definition -- this is how views work today)


* PostgreSQL-tracked dependencies: views create dependencies on the
relations they reference -- functions do not


* Don't have to manually maintain a composite type for the return value

Basically, better performance and easier administration.

Thanks!

- Chris




Re: [HACKERS] Passing arguments to views

2006-02-03 Thread Chris Campbell

On Feb 3, 2006, at 08:50, Martijn van Oosterhout wrote:


Hmm, we actually do inline SQL functions under certain situations, but
only for simple queries (see inline_function in
optimizer/util/clauses.c). One approach would be to expand that
function to inline more complicated things.



* Better query execution performance because the planner can plan the
whole query (rewriting the original query to replace references to
the view with the view's definition -- this is how views work today)


Well, the inlining would achieve the same effect.


So you think approaching it from the beefing up functions side  
would be better than the beefing up views side?



* PostgreSQL-tracked dependencies: views create dependencies on the
relations they reference -- functions do not


Technically a bug. We should be tracking dependencies for functions
anyway.


Well, it's impossible to track dependencies for all functions, since
they're just strings (or compiled code in shared libraries) until
they're executed. But maybe SQL language functions could be
special-cased? Do you think it would be easier to add dependency
tracking for functions, or would it be easier to implement this
functionality using the more-restrictive-language but
better-dependency-tracking view system? When you add dependencies for
things that didn't have dependencies before (like non-SQL functions),
you create all sorts of backwards-compatibility problems due to the
ordering that things need to be dumped and created, and circular
dependencies.


For example, this works:

CREATE FUNCTION foo(INTEGER) RETURNS INTEGER AS 'BEGIN RETURN
bar($1-1); END;' LANGUAGE plpgsql;

CREATE FUNCTION bar(INTEGER) RETURNS INTEGER AS 'BEGIN IF $1 > 0
THEN RETURN $1; ELSE RETURN foo($1); END IF; END;' LANGUAGE plpgsql;


But it wouldn't work if PostgreSQL tracked and enforced dependancies.  
But it could probably be done with SQL-language functions only. I  
don't know if we'd want to add dependancy tracking for functions if  
it only works for SQL-language functions, though.


This is a good point. Though with syntactic sugar you could work around
this too...
this too...


Basically, how views do it? :) By auto-creating a table with the  
proper columns (for a function, that would be an auto-created type).


I'm looking for a function/view hybrid, taking features from each. It  
seems to me that views have most of the features I want (only missing  
the ability to pass arguments), so it's a shorter distance to the  
goal than by starting with functions.


Thanks!

- Chris




Re: [HACKERS] Passing arguments to views

2006-02-03 Thread Chris Campbell

On Feb 3, 2006, at 10:25, Martijn van Oosterhout wrote:


Sure, for most procedural languages you can't do much. But we do do
syntax checking already, and checking that the necessary functions exist
can be considered part of that. It's not terribly important though.
can be considered part of that. It' s not terribly important though.


Dependency tracking needs to be more than a best effort. If you
can't do it accurately and completely, then I don't think it's worth
doing at all.


But I agree with you that syntax checking is probably sufficient. We
don't need true dependency tracking for functions.


The only failure case (where syntax checking won't help you) is  
deleting (or renaming, or modifying) a table that a function was  
using. If you were to run or re-define the function, you'd learn  
about the missing (or renamed, or modified) table, whereas the  
dependency-tracking system would prevent you from making the changes
to the referenced table in the first place.



Ok, here's the deal. A view is nothing more than a RULE. Creating a
view does this automatically


Technically, it's a table and a rule, both of which are created by  
the CREATE VIEW command. We were talking about syntactic sugar, and  
CREATE VIEW is syntactic sugar for doing a CREATE TABLE and a CREATE  
RULE. That was my comparison. I'm aware of how views work. Here's the  
deal: I want to beef up rules versus beefing up functions. Maybe  
that's not the way to go; I'm enjoying this discussion and your  
insights.



CREATE RULE blah AS ON SELECT TO myview DO INSTEAD select statement;

Now, say you wanted to add parameters to this, would you restrict it to
SELECT rules, what about UPDATE or DELETE rules?
SELECT rules, what about UPDATE or DELETE rules?


I don't see a huge use case for anything but SELECT rules, but I  
think it could be extended to any rule type. Maybe the CREATE RULE  
syntax could be something like:


CREATE RULE blah AS ON SELECT(INTEGER, INTEGER, DATE) TO myview  
DO INSTEAD SELECT * FROM sale WHERE sale_date = $3;



The other issue is that currently you can tell from looking at a
statement whether something is a function or a table (is it followed by
an open parenthesis?). With the above change you can't anymore, which
an open parenthesis?). With the above change you can't anymore, which
might mean you can't have functions and tables with the same names
because they'd be ambiguous.


Right. I said that my example syntax was deficient in this regard in  
the first message in this thread. And I solicited ideas for a better  
(unambiguous) syntax. I'm sure we would be able to come up with  
something. Maybe using square brackets instead of parentheses? Curly  
braces? myview->(1, 2, 3) notation? Since views are tables (when
parsing the query, at least) we'd have to allow this syntax for any  
table reference, but throw an error (or silently discard the  
arguments) if the table didn't have a rule matching the argument types?


On the whole, I think allowing the server to inline SRFs would be a far
better way to go...
better way to go...


Maybe, but the highly-structured view definition syntax and  
everything that comes with it (dependency tracking primarily) is so
tasty. I think a little grammar hacking and a couple extra columns in  
pg_rewrite (nargs and argtypes) would get us most of the way there.


I would much rather put more stringent requirements on the programmer  
when defining his query (like a view), versus letting him submit any  
old string as a function (like a function). The database can do so  
much more when it's able to work with a better representation of the  
computation.


At the core, I want query rewriting with arguments. That sounds like  
a better fit for views/rules than functions, so that's why I keep  
coming back to it.


Thanks!

- Chris




Re: [HACKERS] Multiple logical databases

2006-02-03 Thread Chris Campbell

On Feb 3, 2006, at 12:43, Rick Gigger wrote:

If he had multiple ips couldn't he just make them all listen only  
on one specific ip (instead of '*') and just use the default port?


Yeah, but the main idea here is that you could use ipfw to forward  
connections *to other hosts* if you wanted to. Basically working like  
a proxy.


- Chris




Re: [HACKERS] Passing arguments to views

2006-02-03 Thread Chris Campbell

On Feb 3, 2006, at 12:27, Tom Lane wrote:


I guess I can live without the dependency tracking. I can always dump
and reload my database to re-parse all the functions. Maybe we could
have a RELOAD FUNCTION command that would just re-parse an existing
function, so I don't have to dump and reload?


Hm?  I don't understand why you think this is needed.


Consider function foo() that references table bar. When you CREATE  
FUNCTION foo() ... AS 'SELECT ... FROM bar' you get an error message  
if bar doesn't exist. If it does exist, CREATE FUNCTION succeeds.


If you later DROP bar, you're not informed that function foo() was  
referencing it. You only find that out if you redefine foo() (using  
CREATE OR REPLACE FUNCTION and passing in the same definition, which  
fails) or if you try to run foo() (and the query fails).


If functions had true dependency tracking, then you couldn't DROP bar  
due to foo()'s dependency on it, unless you did a DROP CASCADE and  
were alerted that foo() was dropped as well.


I'm fine with those limitations. I can confirm that all of my  
functions are not referencing tables that don't exist by doing a  
CREATE OR REPLACE FUNCTION to reload each function. A pg_dump/ 
pg_restore would accomplish this, but it would be nice to have a  
RELOAD FUNCTION (or REPARSE? or VERIFY?) command that would  
just re-parse the function's source code (like CREATE FUNCTION does)  
and spit out errors if the function is referencing relations that  
don't exist. Just as a way to confirm that the table modification I  
just performed didn't break any functions. On-demand dependency  
checking, in a way.



Note that you can already do

regression=# create function fooey(int, out k1 int, out k2 int)  
returns setof record as
regression-# $$ select unique1, unique2 from tenk1 where thousand =  
$1 $$ language sql;

CREATE FUNCTION
regression=# select * from fooey(44);
  k1  |  k2
------+------
 7044 |  562
 5044 |  692
 1044 |  789
 4044 | 1875
 3044 | 3649
 2044 | 4063
 8044 | 6124
 6044 | 6451
 9044 | 6503
   44 | 7059
(10 rows)

regression=#


Learn something new every day. I'm still using 7.4 for most of my day  
job, and I can't do this without supplying a column definition list:


ERROR:  a column definition list is required for functions returning  
record


I hereby withdraw my proposal for CREATE SQL FUNCTION.

Thanks!

- Chris





Re: [HACKERS] Passing arguments to views

2006-02-03 Thread Chris Campbell

On Feb 3, 2006, at 11:21, Tom Lane wrote:


The SRF concept captures what you want a whole lot better.  If the
implementation isn't up to snuff, we should improve it, not warp other
pieces of the system.


Point taken. The rewriting concept is what I'm after; if that can be  
done pre-planning with SQL functions, I'm all for it. I just thought  
that since rules already do rewriting, that's the best thing to start  
building on.



Martijn mentioned the idea of inlining SQL functions that return sets
--- this is something I've toyed with too, but not got round to looking
at seriously.  AFAICS it would accomplish everything that you could do
at seriously.  AFAICS it would accomplish everything that you could do
with parameters in ON SELECT rules, considering the existing
restrictions on what can be in an ON SELECT rule.  And it wouldn't
require any new concepts at all, just a few(?) pages of code.


True, as long as there's a hook to do the inlining/rewriting before  
the query's planned. I guess we can see function calls at the parse  
stage, check to see if they're SQL functions or not, grab the prosrc,  
do the substitution, then re-parse?


I guess I can live without the dependency tracking. I can always dump
and reload my database to re-parse all the functions. Maybe we could  
have a RELOAD FUNCTION command that would just re-parse an existing  
function, so I don't have to dump and reload?


What about auto-creating a composite type for the function's return  
type based on the query definition? (Like how CREATE VIEW creates an  
appropriate table definition.) Do you see a way for CREATE FUNCTION  
to do that? The problem is that you have to specify a return type in  
CREATE FUNCTION.


Maybe an extension to CREATE FUNCTION as a shorthand for set- 
returning SQL functions? Like:


   CREATE SQL FUNCTION sales_figures(DATE) AS SELECT ... FROM ...  
WHERE sale_date = $1;


It would (1) automatically create a composite type (newtype) for the  
return value, and (2) do a


   CREATE FUNCTION sales_figures(DATE) RETURNS SETOF newtype AS  
'...' LANGUAGE sql.


How much do I have to justify a patch for non-standard RELOAD  
FUNCTION and CREATE SQL FUNCTION commands (as described) in the  
grammar? :)


Thanks!

- Chris




[HACKERS] Passing arguments to views

2006-02-02 Thread Chris Campbell
I've written some complicated queries that I'd like to save inside  
the server and then call from my clients using a short name. For the  
queries that require no external information, views are perfect. For  
queries that *do* require external information (like a search date  
range), I've used functions (written in the SQL language, because I'm  
just writing plain SQL queries but using $1, $2 tokens for passed-in  
arguments).


When I use these functions, I'm typically joining the results of the  
function with other tables. Since much of the work is being done  
inside the function, the planner can't use both the inside-function  
and outside-function query information when generating a query plan.  
Instead, it has to do Function Scans (planning and executing the SQL  
inside the function at each execution, I'm assuming) and then  
manipulate the output.


Ideally, I'd like to be able to write queries that contain $n tokens  
that will be substituted at execution time, save them on the server,  
and let the query planner plan the whole query before it's executed.


Basically, writing views with arguments.

For example, a sales_figures view that requires start_date and  
end_date parameters could be used like this:


   CREATE VIEW sales_figures($1, $2) AS
   SELECT ... FROM ... WHERE purchase_date BETWEEN $1 AND $2;

   SELECT * FROM sales_figures('2005-08-22', '2005-09-14') sf JOIN  
customers c ON (sf.customer_id = c.customer_id)


What do you think? Is this an interesting feature? Is this the right  
way to go about it, or should I try to get the planner to see through  
SQL function boundaries (e.g., enhance the function system instead of  
enhancing the view system)? Would this be a good project for a newbie  
to the code?


I can see that the syntax used above would be problematic: how would  
it distinguish that from a call to a sales_figures() function? Any  
comments about alternative syntax would be welcome, too!


Thanks!

- Chris





Re: [HACKERS] Passing arguments to views

2006-02-02 Thread Chris Campbell

On Feb 2, 2006, at 23:33, Greg Stark wrote:

The right way to go about this in the original abstract set-theoretic
mindset of SQL is to code the view to retrieve all the rows and then
apply further WHERE clause restrictions to the results of the view.

So for example this:


CREATE VIEW sales_figures($1, $2) AS
SELECT ... FROM ... WHERE purchase_date BETWEEN $1 AND $2;


Becomes:

CREATE VIEW sales_figures AS SELECT ... FROM ...

And then you query it with

SELECT * FROM sales_figures WHERE purchase_date BETWEEN $1 AND $2


That was a very simplistic example and didn't illustrate my point --  
I apologize. I was trying to think of something succinct and  
illustrative for a quick mailing list post but came up short.


Maybe a better example would be a situation where you want to do  
substitutions in places other than the WHERE clause? There's no way  
to push that out to the calling query. But even in this simple  
case, the easier-to-grok syntax of making a view look like a function  
(and codifying the options for restricting the results as arguments  
to the view) is a nice win in terms of readability and maintainability.


I was hoping that people would overlook my bad example because  
they've needed a "view with arguments" tool in their own work, and  
the conversation would just be about how it could be implemented. :)


I'll try to distill a better example from some of the projects I'm  
working on.


Thanks!

- Chris


---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [HACKERS] Problem with dblink regression test

2005-06-22 Thread Chris Campbell

On Jun 22, 2005, at 12:52, Tom Lane wrote:


Jim C. Nasby [EMAIL PROTECTED] writes:


Is there a way to confirm which libpq.so psql and/or dblink.so has
linked to? Are there any other tests I could run to shed some light on
this?


On Linux you use ldd to find out what the linker will do with
dependencies of an executable or shared library.  I don't recall the
equivalent incantation on FreeBSD or OS X but I'm sure there is one.


On OS X, use otool -L:

$ otool -L /Library/PostgreSQL/bin/psql
/Library/PostgreSQL/bin/psql:
        /Library/PostgreSQL/lib/libpq.3.dylib (compatibility version 3.0.0, current version 3.2.0)
        /usr/lib/libssl.0.9.7.dylib (compatibility version 0.9.7, current version 0.9.7)
        /usr/lib/libcrypto.0.9.7.dylib (compatibility version 0.9.7, current version 0.9.7)
        /usr/lib/libz.1.dylib (compatibility version 1.0.0, current version 1.0.0)
        /usr/lib/libncurses.5.dylib (compatibility version 5.0.0, current version 5.0.0)
        /usr/lib/libresolv.9.dylib (compatibility version 1.0.0, current version 324.9.0)
        /usr/lib/libSystem.B.dylib (compatibility version 1.0.0, current version 71.1.1)


- Chris


---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster


[HACKERS] New wal_sync_method for Darwin?

2005-04-15 Thread Chris Campbell
I think we should add a new wal_sync_method that will use Darwin's 
F_FULLFSYNC fcntl().

From sys/fcntl.h:
#define F_FULLFSYNC 51  /* fsync + ask the drive to flush to the media */

This fcntl() will basically perform an fsync() on the file, then flush 
the write cache of the disk.

I'll attempt to work up the patch. It should be trivial. Might need 
some help on the configure tests though (it should #include 
sys/fcntl.h and make sure F_FULLFSYNC is defined).

What's an appropriate name? It seems equivalent to 
fsync_writethrough. I suggest fsync_full, fsync_flushdisk, or 
something. Is there a reason we're not indicating the supported 
platform in the name of the method? Would fsync_darwinfull be better? 
Let users know that it's only available for Darwin? Should we do the 
same thing with win32-specific methods?

I think fsync() and F_FULLFSYNC should both be available as options on 
Darwin. Currently, both fsync and fsync_writethrough set sync_method 
to SYNC_METHOD_FSYNC, so there's no way to distinguish between them.

I'm unsure which one would be the best default. fsync() matches the 
semantics on other platforms, and conscientious users could specify the 
F_FULLFSYNC fcntl() method if they want to make sure writes are flushed 
past the drive's write cache.

Comments?
Thanks!
- Chris




Re: [HACKERS] pg_dump --comment?

2004-05-31 Thread Chris Campbell
Harald Fuchs wrote:
Why don't you just do
  ( echo '-- This is my comment'
    pg_dump whatever
  ) > dumpfile
?
How could I dump using the custom format, and then use dumpfile with 
pg_restore to restore the dump? If I just prepend the comment to the 
file, then pg_restore will choke, since the file won't be in the proper 
custom format. I would have to remove the comment before sending the 
file to pg_restore. Is there an easy way to do that -- one that can be 
easily automated, and won't take a huge amount of time given a 4 gig 
dump file that must be modified?

Given those requirements, building a commenting mechanism into the 
custom format would work out very nicely, I think.

Thanks!
- Chris




[HACKERS] pg_dump --comment?

2004-05-27 Thread Chris Campbell
I've encountered a situation where I'd like to store some information 
about the database when I do a pg_dump. For instance, the timestamp of 
the dump. And some other information that I pull from the database.

If pg_dump had a --comment flag that allowed me to pass a string that 
would be stored inside the dumpfile, that I could then retrieve in some 
way (without doing a full restore of the dump), that would meet my 
needs. In discussing this idea with other people, it sounded like a 
general-use feature that mankind as a whole could benefit from. :)

Here's what I'm envisioning:
   pg_dump --comment 'This is a comment' [more pg_dump args]
That would store the comment ('This is a comment') in the dump file 
somehow. The definition of "somehow" would vary depending on the output 
format (text, tar, or custom). Initially, since I only use the custom 
format, I would only focus on getting it to work with that. But for the 
text format, there could be a SQL comment at the top of the file with

-- COMMENT: This is a comment
or something. In the tar format, there could be a comment file in the 
archive that contains the text "This is a comment".

For the custom format...I haven't looked at the format specification, so 
I don't know exactly where the comment would go. It could go at the very 
top of the file, and have a special delimiter after it. pg_restore would 
just skim over the file until the delimiter is reached, and then go on 
about its business. The benefit of this scheme is that any program could 
read the comment -- just open a file and read the bytes until the delimiter.
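
As a sketch of the read-until-delimiter idea -- the NUL delimiter and  
file name here are my own assumptions, not a proposed format:

```shell
# Hypothetical layout: comment text, a NUL delimiter, then the
# custom-format data (PGDMP-data stands in for the real bytes).
printf 'This is a comment\0PGDMP-data' > dump.custom

# Any program can recover the comment by reading up to the first NUL:
head -c 64 dump.custom | tr '\000' '\n' | head -n 1
```

Whatever delimiter is chosen just needs to be a byte sequence that can  
never appear in the comment itself.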

There could also be a pg_dump or pg_restore option that prints out the 
comment stored in a given dump file, or another binary (pg_comment?) 
that does that.

Is this a desirable feature? Should I work it up like described and 
submit a patch? Any comments/suggestions?

Thanks!
- Chris

