Re: [HACKERS] pg_dump feature
On Mon, Sep 22, 2008 at 03:25:35AM +1000, Naz wrote:
> Hi all,
> I brought this up a few years ago in the 7.4 days, and since there
> is still no satisfactory solution to this I thought I'd raise it
> again. When dumping a schema, it is often necessary to dump the
> tables separately from the constraints and other non-structural
> metadata. The most obvious use case for this is when updating a
> schema for a database.
>
> Assume I have a database named "production" and one named "testing".

Good so far.

> Let's now say that I have decided that "testing" is correct and the
> app is ready to work with it, and I now want to put the data from
> "production" into the schema from "testing".

That's where you should have been storing all the SQL transformations (DDL, DCL and DML) in your source code management system and testing said transformations to make sure they all fit inside one transaction :)

> I hope someone agrees with me :)

The above is what I've seen work. Other schemes...not so well.

> السلام عليكم

And upon you, peace.

David.
--
David Fetter <[EMAIL PROTECTED]> http://fetter.org/
Phone: +1 415 235 3778  AIM: dfetter666  Yahoo!: dfetter
Skype: davidfetter  XMPP: [EMAIL PROTECTED]
Remember to vote!
Consider donating to Postgres: http://www.postgresql.org/about/donate

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers
[HACKERS] pg_dump feature
Hi all,

I brought this up a few years ago in the 7.4 days, and since there is still no satisfactory solution to this I thought I'd raise it again. When dumping a schema, it is often necessary to dump the tables separately from the constraints and other non-structural metadata. The most obvious use case for this is when updating a schema for a database.

Assume I have a database named "production" and one named "testing". Let's now say that I have decided that "testing" is correct and the app is ready to work with it, and I now want to put the data from "production" into the schema from "testing". Currently, I have to dump the schema from the "testing" database, and then manually break it after the table definitions and before any constraints. Otherwise, the data won't restore properly due to ordering issues, etc.

It would be far easier if there were a mechanism in pg_dump that allowed you to dump the two parts of the schema separately, allowing easy scripting of this, and not requiring you to examine the file by hand and use head/tail to apply only the correct parts of the schema at the appropriate points in the script.

I've been given a few different suggestions over the years, but they all involve computational detection of the break point in the schema dump, which is unreliable and hacky. A proper mechanism in pg_dump would be next to trivial to implement (for someone familiar with the code who can write C, which I can't, otherwise I'd do it), would avoid potentially silent bugs in shell scripts, and would massively assist in the efficient manipulation of in-place PostgreSQL databases.

I hope someone agrees with me :)

السلام عليكم

- Naz.
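For illustration, the kind of break-point detection being criticized as unreliable might look like this (a hypothetical Python sketch; the marker statements are assumptions about typical plain-text pg_dump output, which is exactly why the approach is fragile):

```python
def split_schema_dump(dump_sql):
    """Split a plain-text schema dump into a pre-data part (table
    definitions) and a post-data part (constraints, indexes, triggers)
    by scanning for the first post-data statement.  Fragile by design:
    it relies on pg_dump's output ordering and statement spelling."""
    post_data_markers = (
        "ALTER TABLE ONLY",      # ADD CONSTRAINT statements
        "CREATE INDEX",
        "CREATE UNIQUE INDEX",
        "CREATE TRIGGER",
    )
    lines = dump_sql.splitlines(keepends=True)
    for i, line in enumerate(lines):
        if line.startswith(post_data_markers):
            return "".join(lines[:i]), "".join(lines[i:])
    return dump_sql, ""
```

A restore script would then apply the first part, load the data, and apply the second part — exactly the sequencing a built-in pg_dump option could make reliable instead of heuristic.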
Re: [HACKERS] [patch] fix dblink security hole
Tom Lane wrote:
> What do you think about getting rid of the password_from_string state
> variable?  It was always a bit of a kluge, and we don't seem to need it
> anymore with this approach.

It is still used in PQconnectionUsedPassword(). That is still needed to prevent a non-superuser from logging in as the superuser if the server does not require authentication. In that case, any bogus password could be added to the connection string and be subsequently ignored, if not for this check.

e.g. with a default pg_hba.conf

8<-
psql contrib_regression -U luser
psql (8.4devel)
Type "help" for help.

contrib_regression=> SELECT dblink_connect('password=luser dbname=contrib_regression');
ERROR:  password is required
DETAIL:  Non-superuser cannot connect if the server does not request a password.
HINT:  Target server's authentication method must be changed.
8<-

Without PQconnectionUsedPassword() that would have succeeded in logging in as the superuser, because the password is never actually checked.

Joe
Re: [HACKERS] [patch] fix dblink security hole
Joe Conway <[EMAIL PROTECTED]> writes:
> Maybe better:

> static PQconninfoOption *
> conninfo_parse(const char *conninfo, PQExpBuffer errorMessage,
>                bool fill_defaults, bool *password_from_string)

I'm thinking a separate conninfo_fill_defaults function is better, though it's not a big deal.

What do you think about getting rid of the password_from_string state variable? It was always a bit of a kluge, and we don't seem to need it anymore with this approach.

			regards, tom lane
Re: [HACKERS] [patch] fix dblink security hole
Joe Conway wrote:
> Tom Lane wrote:
>> Refactoring doesn't seem like an easy way to fix this, because of the
>> problem that the behavior of pulling up defaults is part of the API
>> specification for PQconndefaults().  Thoughts?
>
> Hmm, I could have sworn I looked for that, and saw it elsewhere. Anyway,
> you are obviously correct.
>
> conninfo_parse() is presently only called from a few places -- maybe we
> should have conninfo_parse() really just parse, and create a new
> conninfo_get_missing() or some such that fills in missing values?

Maybe better:

static PQconninfoOption *
conninfo_parse(const char *conninfo, PQExpBuffer errorMessage,
               bool fill_defaults, bool *password_from_string)

There are only three call sites including the new one. The two originals could use fill_defaults == true, and PQconninfoParse could use false.

Joe
Re: [HACKERS] [patch] fix dblink security hole
Joe Conway <[EMAIL PROTECTED]> writes:
> Tom Lane wrote:
>> Refactoring doesn't seem like an easy way to fix this, because of the
>> problem that the behavior of pulling up defaults is part of the API
>> specification for PQconndefaults().

> conninfo_parse() is presently only called from a few places -- maybe we
> should have conninfo_parse() really just parse, and create a new
> conninfo_get_missing() or some such that fills in missing values?

Doh, I must be too tired, because now that seems obvious. Will set this aside and try it again tomorrow.

			regards, tom lane
Re: [HACKERS] [patch] fix dblink security hole
Tom Lane wrote:
> Joe Conway <[EMAIL PROTECTED]> writes:
>> New patch attached.
>
> erm ... wait a minute.  This approach doesn't actually solve the problem
> at all, because conninfo_parse is responsible for filling in various
> sorts of default values.  In particular it would happily pull a password
> from the services file or the PGPASSWORD environment variable, and
> looking at the array after the fact doesn't tell whether that happened.
>
> Refactoring doesn't seem like an easy way to fix this, because of the
> problem that the behavior of pulling up defaults is part of the API
> specification for PQconndefaults().  Thoughts?

Hmm, I could have sworn I looked for that, and saw it elsewhere. Anyway, you are obviously correct.

conninfo_parse() is presently only called from a few places -- maybe we should have conninfo_parse() really just parse, and create a new conninfo_get_missing() or some such that fills in missing values?

Joe
Re: [HACKERS] [patch] fix dblink security hole
Joe Conway <[EMAIL PROTECTED]> writes:
> New patch attached.

erm ... wait a minute. This approach doesn't actually solve the problem at all, because conninfo_parse is responsible for filling in various sorts of default values. In particular it would happily pull a password from the services file or the PGPASSWORD environment variable, and looking at the array after the fact doesn't tell whether that happened.

Refactoring doesn't seem like an easy way to fix this, because of the problem that the behavior of pulling up defaults is part of the API specification for PQconndefaults(). Thoughts?

			regards, tom lane
Re: [HACKERS] [patch] fix dblink security hole
Joe Conway <[EMAIL PROTECTED]> writes:
> New patch attached.

This is close, but you're failing to guard against a few out-of-memory corner cases (and now that I look, PQconndefaults() is too). The libpq documentation needs more work than this, too. I'll make a cleanup pass and commit.

BTW, I'm quite tempted to get rid of pgpass_from_client and simplify the specification of PQconnectionUsedPassword to be "did the server request a password?". Thoughts?

			regards, tom lane
Re: [HACKERS] Proposal: move column defaults into pg_attribute along with attacl
* Tom Lane ([EMAIL PROTECTED]) wrote:
> Stephen Frost <[EMAIL PROTECTED]> writes:
>> If we were to accept the pg_attrdef approach, why aren't we
>> doing a pg_attracl table instead of adding a column to pg_attribute?
>
> That's actually not an unreasonable question.  If you were to do that
> then you could attach OIDs to the attribute ACLs, which might be a nicer
> representation in pg_shdepend than you were thinking of using.

What bugs me about this is that it comes across as poor database design: both of these really are attributes of a column. We're creating separate tables for each so we can induce a cleaner ID for them, which just isn't the right approach imv. This would also be another table to go deal with when a column is removed, and a less-than-obvious place to look for this information from the user's perspective. It's also the case that the items in these tables and the columns they're attached to really are one-to-one; there's no many-to-one or one-to-many relationship between them.

At the end of the day, this approach feels like more of a kludge to me to keep the dependency system simple, rather than making the dependency system support the real-world system layout, which is that columns don't have their own IDs.

Maybe we could approach this another way: what about creating a new table, pg_attrcolids, that has both pg_attrdef and pg_attracl rolled into it? Then at least we're accepting that we need a distinct ID for columns, but keeping them in one specific place. Is there a reason we would need a separate ID for each?

It also strikes me to wonder about possible future support for re-ordering columns, though I don't immediately see a way to use this as a step towards supporting that.

Thanks,

Stephen

signature.asc
Description: Digital signature
Re: [HACKERS] [patch] fix dblink security hole
Tom Lane wrote:
> Yeah.  We could make one further refinement: callers that don't care
> about acquiring an error string can pass NULL for the errmsg parameter.
> That tells PQconninfoParse to throw away the errmsg string anyway.
> With that, the minimal case isn't much uglier than your original:
> just need a NULL arg tacked onto the call.

True

> BTW, the usual method for doing this is just to give the caller back
> the errorBuf.data, not incur an additional strdup that could fail.

OK, wasn't entirely sure that was safe. New patch attached.

Joe

Index: contrib/dblink/dblink.c
===
RCS file: /opt/src/cvs/pgsql/contrib/dblink/dblink.c,v
retrieving revision 1.74
diff -c -r1.74 dblink.c
*** contrib/dblink/dblink.c	3 Jul 2008 03:56:57 -	1.74
--- contrib/dblink/dblink.c	22 Sep 2008 02:09:39 -
*** 93,98 ****
--- 93,99 ----
  static HeapTuple get_tuple_of_interest(Oid relid, int2vector *pkattnums,
  					   int16 pknumatts, char **src_pkattvals);
  static Oid	get_relid_from_relname(text *relname_text);
  static char *generate_relation_name(Oid relid);
+ static void dblink_connstr_check(const char *connstr);
  static void dblink_security_check(PGconn *conn, remoteConn *rconn);
  static void dblink_res_error(const char *conname, PGresult *res,
  				 const char *dblink_context_msg, bool fail);
*** 165,170 ****
--- 166,172 ----
  		else \
  		{ \
  			connstr = conname_or_str; \
+ 			dblink_connstr_check(connstr); \
  			conn = PQconnectdb(connstr); \
  			if (PQstatus(conn) == CONNECTION_BAD) \
  			{ \
*** 229,234 ****
--- 231,239 ----
  	if (connname)
  		rconn = (remoteConn *) palloc(sizeof(remoteConn));
+ 
+ 	/* check password in connection string if not superuser */
+ 	dblink_connstr_check(connstr);
  	conn = PQconnectdb(connstr);
  	MemoryContextSwitchTo(oldcontext);
*** 246,252 ****
  				 errdetail("%s", msg)));
  	}
! 	/* check password used if not superuser */
  	dblink_security_check(conn, rconn);
  	if (connname)
--- 251,257 ----
  				 errdetail("%s", msg)));
  	}
! 	/* check password actually used if not superuser */
  	dblink_security_check(conn, rconn);
  	if (connname)
*** 2252,2257 ****
--- 2257,2293 ----
  }
  static void
+ dblink_connstr_check(const char *connstr)
+ {
+ 	if (!superuser())
+ 	{
+ 		PQconninfoOption *options;
+ 		PQconninfoOption *option;
+ 		bool		conn_used_password = false;
+ 
+ 		options = PQconninfoParse(connstr, NULL);
+ 		for (option = options; option->keyword != NULL; option++)
+ 		{
+ 			if (strcmp(option->keyword, "password") == 0)
+ 			{
+ 				if (option->val != NULL && option->val[0] != '\0')
+ 				{
+ 					conn_used_password = true;
+ 					break;
+ 				}
+ 			}
+ 		}
+ 		PQconninfoFree(options);
+ 
+ 		if (!conn_used_password)
+ 			ereport(ERROR,
+ 					(errcode(ERRCODE_S_R_E_PROHIBITED_SQL_STATEMENT_ATTEMPTED),
+ 					 errmsg("password is required"),
+ 					 errdetail("Non-superuser must provide a password in the connection string.")));
+ 	}
+ }
+ 
+ static void
  dblink_res_error(const char *conname, PGresult *res,
  				 const char *dblink_context_msg, bool fail)
  {
  	int			level;

Index: doc/src/sgml/libpq.sgml
===
RCS file: /opt/src/cvs/pgsql/doc/src/sgml/libpq.sgml,v
retrieving revision 1.263
diff -c -r1.263 libpq.sgml
*** doc/src/sgml/libpq.sgml	19 Sep 2008 20:06:13 -	1.263
--- doc/src/sgml/libpq.sgml	22 Sep 2008 02:08:50 -
*** 625,630 ****
--- 625,661 ----
+      PQconninfoParsePQconninfoParse
+ 
+ 
+        Returns parsed connection options from the provided connection string.
+ 
+ 
+ PQconninfoOption *PQconninfoParse(const char *conninfo, char **errmsg);
+ 
+ 
+ 
+        Returns a connection options array.  This can be used to determine
+        the PQconnectdb options in the provided
+        connection string.  The return value points to an array of
+        PQconninfoOption structures, which ends
+        with an entry having a null keyword pointer.  The
+        null pointer is returned if an error occurs.  In this case,
+        errmsg contains the error message.  Passing
+        NULL for errmsg tells
+        PQconninfoParse to throw away the error message.
+ 
+ 
+ 
+        After processing the options array, free it by passing it to
+        PQconninfoFree.  If this is not done, a small amount of memory
+        is leaked for each call to PQconndefaults.
+ 
+ 
+ 
+ 
+ 
+ 
+ 
      PQfinishPQfinish

Index: src/interfaces/libpq/exports.txt
===
RCS file: /opt/src/cvs/pgsql/src/interfaces/libpq/exports.txt,v
retrieving revision 1.21
diff -c -r1.21 exports.txt
*** src/interfaces/libpq/exports.txt	19 Sep 2008 20:06:13 -	1.21
--- src/interfaces/libpq/exports.txt	21 Sep 200
Re: [HACKERS] Proposal: move column defaults into pg_attribute along with attacl
Stephen Frost <[EMAIL PROTECTED]> writes:
> Honestly, I really disliked the code which assumed pg_attribute had no
> NULLable/toastable columns and used what seemed like pretty gruesome
> hacks to create pg_attribute structures.

Agreed, but that seems orthogonal to the point here, which is that a column's default expression is a distinct object for dependency purposes and so it needs its own ID. An OID in the pg_attrdef catalog works nicely for that; the alternatives I've thought of seem like kluges.

> If we were to accept the pg_attrdef approach, why aren't we
> doing a pg_attracl table instead of adding a column to pg_attribute?

That's actually not an unreasonable question. If you were to do that then you could attach OIDs to the attribute ACLs, which might be a nicer representation in pg_shdepend than you were thinking of using.

			regards, tom lane
Re: [HACKERS] Proposal: move column defaults into pg_attribute along with attacl
* Tom Lane ([EMAIL PROTECTED]) wrote:
> I can think of a way around that: represent a default expression using
> classid = OID of pg_attribute, objid = OID of table, objsubid = column
> attnum.  This is distinct from the column itself, which is represented
> with classid = OID of pg_class.  It seems pretty ugly and potentially
> confusing though.  Also there would be a compatibility issue for clients
> that examine pg_depend.  Is it ugly enough to scuttle the whole concept
> of merging pg_attrdef into pg_attribute?

Being able to handle a setup like that (though in pg_shdepend) might be necessary anyway, depending on the approach we're happiest with for column-level ACL dependencies. Right now I've avoided it by just going through all of the columns and the table-level grantors/grantees and listing them all when updating the dependencies. That keeps the dependency system simple but complicates other things for it. I posed a question previously about how people felt and don't recall there being any response as yet.

Certainly, if we move to objid = table OID, objsubid = column attnum in pg_shdepend, it strikes me that we should do the same in pg_depend where appropriate; otherwise it'll just be a confusing inconsistency.

Honestly, I really disliked the code which assumed pg_attribute had no NULLable/toastable columns and used what seemed like pretty gruesome hacks to create pg_attribute structures. I'd also really like to get away from things which approached pg_attribute in that way, such as pg_attrdef. If we were to accept the pg_attrdef approach, why aren't we doing a pg_attracl table instead of adding a column to pg_attribute?

Thanks,

Stephen

signature.asc
Description: Digital signature
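Tom's representation, quoted above, can be made concrete with a small model (a hedged Python sketch for illustration only; the table OID 16384 is an invented example value, while 1259 and 1249 are the well-known catalog OIDs of pg_class and pg_attribute):

```python
# pg_depend entries are addressed by the triple (classid, objid, objsubid).
PG_CLASS_OID = 1259       # OID of the pg_class catalog
PG_ATTRIBUTE_OID = 1249   # OID of the pg_attribute catalog

# The column itself: classid = pg_class, objid = table OID, objsubid = attnum.
column_ref = (PG_CLASS_OID, 16384, 2)

# Under the quoted scheme, the column's default expression reuses the same
# objid/objsubid but classid = pg_attribute, so the default stays a distinct
# object for dependency purposes without needing its own OID.
default_ref = (PG_ATTRIBUTE_OID, 16384, 2)

assert column_ref != default_ref
```

The scheme's ugliness, as noted, is that the classid no longer names the catalog the object actually lives in; it is being repurposed purely as a disambiguator.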
Re: [HACKERS] [patch] fix dblink security hole
Joe Conway <[EMAIL PROTECTED]> writes:
> Tom Lane wrote:
>> Uh, you're confusing the backend environment with libpq's much more
>> spartan lifestyle.  errmsg will be malloc'd and it will *not* go away
>> unless the caller free()s it.

> Yup, just figured that out. Otherwise OK with it?

Yeah. We could make one further refinement: callers that don't care about acquiring an error string can pass NULL for the errmsg parameter. That tells PQconninfoParse to throw away the errmsg string anyway. With that, the minimal case isn't much uglier than your original: just need a NULL arg tacked onto the call.

BTW, the usual method for doing this is just to give the caller back the errorBuf.data, not incur an additional strdup that could fail.

			regards, tom lane
Re: [HACKERS] [patch] fix dblink security hole
Tom Lane wrote:
> Joe Conway <[EMAIL PROTECTED]> writes:
>> If the return value is NULL, use errmsg if you'd like. I'd guess in most
>> instances you don't even need to bother freeing errmsg as it is in a
>> limited life memory context.
>
> Uh, you're confusing the backend environment with libpq's much more
> spartan lifestyle.  errmsg will be malloc'd and it will *not* go away
> unless the caller free()s it.

Yup, just figured that out. Otherwise OK with it?

Joe
Re: [HACKERS] [patch] fix dblink security hole
Joe Conway <[EMAIL PROTECTED]> writes:
> If the return value is NULL, use errmsg if you'd like. I'd guess in most
> instances you don't even need to bother freeing errmsg as it is in a
> limited life memory context.

Uh, you're confusing the backend environment with libpq's much more spartan lifestyle. errmsg will be malloc'd and it will *not* go away unless the caller free()s it.

			regards, tom lane
Re: [HACKERS] [patch] fix dblink security hole
Tom Lane wrote:
> Hmm ... one problem with this is that the caller can't tell
> failure-because-out-of-memory from failure-because-string-is-bogus.
> Is it worth having the PQconninfoParse function pass back the error
> message to avoid this corner case?

I thought briefly about it, and wasn't sure it would be worth the ugliness.

> The API would be a lot more ugly, perhaps
>
> 	int PQconninfoParse(const char *connstr,
> 			    PQconninfoOption **options,
> 			    char **errmsg)
>
> okay: *options is set, *errmsg is NULL, return true
> bogus string: *options is NULL, *errmsg is set, return false
> out of memory: both outputs NULL, return false

conninfo_parse() returns NULL on error, so why not something like:

PQconninfoOption *
PQconninfoParse(const char *conninfo, char **errmsg)
{
	PQExpBufferData errorBuf;
	bool		password_from_string;
	PQconninfoOption *connOptions;

	initPQExpBuffer(&errorBuf);
	connOptions = conninfo_parse(conninfo, &errorBuf, &password_from_string);
	if (!connOptions && errmsg)
		*errmsg = pstrdup(errorBuf.data);
	termPQExpBuffer(&errorBuf);
	return connOptions;
}

If the return value is NULL, use errmsg if you'd like. I'd guess in most instances you don't even need to bother freeing errmsg as it is in a limited life memory context.

Joe
Re: [HACKERS] [patch] fix dblink security hole
Joe Conway <[EMAIL PROTECTED]> writes:
> Tom Lane wrote:
>> So that seems to tilt the decision towards exposing the conninfo_parse
>> function.  Joe, do you want to have a go at it, or shall I?

> Here's a first shot.

Hmm ... one problem with this is that the caller can't tell failure-because-out-of-memory from failure-because-string-is-bogus. That doesn't matter for your proposed dblink patch, but I had been thinking of documenting that if someone wanted to get an error message explaining just what was wrong with the conninfo string, they could try to open a connection with it and use the resulting failure message. But it's just barely conceivable that the PQconnect call *wouldn't* fail because out-of-memory. (Not very likely in a sequential application, but definitely seems possible in a multithreaded app --- some other thread could release memory meanwhile.)

Is it worth having the PQconninfoParse function pass back the error message to avoid this corner case? The API would be a lot more ugly, perhaps

	int PQconninfoParse(const char *connstr,
			    PQconninfoOption **options,
			    char **errmsg)

okay: *options is set, *errmsg is NULL, return true
bogus string: *options is NULL, *errmsg is set, return false
out of memory: both outputs NULL, return false

			regards, tom lane
Re: [HACKERS] [patch] fix dblink security hole
Tom Lane wrote:
> "Marko Kreen" <[EMAIL PROTECTED]> writes:
>> On 9/21/08, Joe Conway <[EMAIL PROTECTED]> wrote:
>>> Why? pg_service does not appear to support wildcards, so what is the
>>> attack vector?
>>
>> "service=foo host=custom"
>>
>> The proposal to require a password = foo entry in the conn string seems
>> to resolve all of these, without taking away useful capability. I don't
>> think that forbidding use of services altogether is a good thing.
>
> So that seems to tilt the decision towards exposing the conninfo_parse
> function.  Joe, do you want to have a go at it, or shall I?

Here's a first shot. Notes:

1. I have not removed PQconnectionUsedPassword and related. It is still needed to prevent a non-superuser from logging in as the superuser if the server does not require authentication. In that case, any bogus password could be added to the connection string and be subsequently ignored, if not for this check.

2. I assume this ought to be applied as two separate commits -- one for libpq, and one for dblink.

3. I can't easily verify that I got libpq.sgml perfect; I've gotten out of sync with the required tool chain for the docs.

Comments?
Joe

Index: contrib/dblink/dblink.c
===
RCS file: /opt/src/cvs/pgsql/contrib/dblink/dblink.c,v
retrieving revision 1.74
diff -c -r1.74 dblink.c
*** contrib/dblink/dblink.c	3 Jul 2008 03:56:57 -	1.74
--- contrib/dblink/dblink.c	22 Sep 2008 00:34:17 -
*** 93,98 ****
--- 93,99 ----
  static HeapTuple get_tuple_of_interest(Oid relid, int2vector *pkattnums,
  					   int16 pknumatts, char **src_pkattvals);
  static Oid	get_relid_from_relname(text *relname_text);
  static char *generate_relation_name(Oid relid);
+ static void dblink_connstr_check(const char *connstr);
  static void dblink_security_check(PGconn *conn, remoteConn *rconn);
  static void dblink_res_error(const char *conname, PGresult *res,
  				 const char *dblink_context_msg, bool fail);
*** 165,170 ****
--- 166,172 ----
  		else \
  		{ \
  			connstr = conname_or_str; \
+ 			dblink_connstr_check(connstr); \
  			conn = PQconnectdb(connstr); \
  			if (PQstatus(conn) == CONNECTION_BAD) \
  			{ \
*** 229,234 ****
--- 231,239 ----
  	if (connname)
  		rconn = (remoteConn *) palloc(sizeof(remoteConn));
+ 
+ 	/* check password in connection string if not superuser */
+ 	dblink_connstr_check(connstr);
  	conn = PQconnectdb(connstr);
  	MemoryContextSwitchTo(oldcontext);
*** 246,252 ****
  				 errdetail("%s", msg)));
  	}
! 	/* check password used if not superuser */
  	dblink_security_check(conn, rconn);
  	if (connname)
--- 251,257 ----
  				 errdetail("%s", msg)));
  	}
! 	/* check password actually used if not superuser */
  	dblink_security_check(conn, rconn);
  	if (connname)
*** 2252,2257 ****
--- 2257,2293 ----
  }
  static void
+ dblink_connstr_check(const char *connstr)
+ {
+ 	if (!superuser())
+ 	{
+ 		PQconninfoOption *options;
+ 		PQconninfoOption *option;
+ 		bool		conn_used_password = false;
+ 
+ 		options = PQconninfoParse(connstr);
+ 		for (option = options; option->keyword != NULL; option++)
+ 		{
+ 			if (strcmp(option->keyword, "password") == 0)
+ 			{
+ 				if (option->val != NULL && option->val[0] != '\0')
+ 				{
+ 					conn_used_password = true;
+ 					break;
+ 				}
+ 			}
+ 		}
+ 		PQconninfoFree(options);
+ 
+ 		if (!conn_used_password)
+ 			ereport(ERROR,
+ 					(errcode(ERRCODE_S_R_E_PROHIBITED_SQL_STATEMENT_ATTEMPTED),
+ 					 errmsg("password is required"),
+ 					 errdetail("Non-superuser must provide a password in the connection string.")));
+ 	}
+ }
+ 
+ static void
  dblink_res_error(const char *conname, PGresult *res,
  				 const char *dblink_context_msg, bool fail)
  {
  	int			level;

Index: doc/src/sgml/libpq.sgml
===
RCS file: /opt/src/cvs/pgsql/doc/src/sgml/libpq.sgml,v
retrieving revision 1.263
diff -c -r1.263 libpq.sgml
*** doc/src/sgml/libpq.sgml	19 Sep 2008 20:06:13 -	1.263
--- doc/src/sgml/libpq.sgml	21 Sep 2008 23:08:27 -
*** 625,630 ****
--- 625,658 ----
+      PQconninfoParsePQconninfoParse
+ 
+ 
+        Returns parsed connection options from the provided connection string.
+ 
+ 
+ PQconninfoOption *PQconninfoParse(const char *conninfo);
+ 
+ 
+ 
+        Returns a connection options array.  This can be used to determine
+        the PQconnectdb options in the provided
+        connection string.  The return value points to an array of
+        PQconninfoOption structures, which ends
+        with an entry having a null keyword pointer.  The
+        null pointer is returned if memory could not be allocated.
+ 
+ 
+ 
+        After processing the options array, free it by passing it to
+        PQconninfoFree.  If this is not done, a small amount of memory
+
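The decision that dblink_connstr_check makes can be modeled outside of C (a simplified, hypothetical Python sketch of the same rule; the toy parser here ignores quoting and the other conninfo syntax that real libpq handles):

```python
def parse_conninfo(conninfo):
    """Very simplified stand-in for libpq's conninfo parsing: split a
    'key=value key=value' string into a dict.  Real libpq also handles
    single-quoted values, whitespace around '=', and default values."""
    options = {}
    for token in conninfo.split():
        key, _, value = token.partition("=")
        options[key] = value
    return options

def connstr_check(conninfo, is_superuser):
    """Mirror of the dblink_connstr_check rule: a non-superuser must
    supply a non-empty password directly in the connection string."""
    if is_superuser:
        return True
    return bool(parse_conninfo(conninfo).get("password"))
```

The point of the rule is that a password named in the string proves the caller knows a credential, rather than silently borrowing the server process's pgpass file or environment.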
Re: [HACKERS] Predictable order of SQL commands in pg_dump
Great! Will this be implemented in a future version? It seems it would be very helpful, especially for people who commit database structure to CVS/SVN once per minute to track change history (or similar)...

On Sun, Sep 21, 2008 at 11:57 PM, Tom Lane <[EMAIL PROTECTED]> wrote:
> "Dmitry Koterov" <[EMAIL PROTECTED]> writes:
>> CREATE TRIGGER t000_set_id
>> -BEFORE INSERT OR DELETE OR UPDATE ON a
>> +BEFORE INSERT OR DELETE OR UPDATE ON b
>> FOR EACH ROW
>> EXECUTE PROCEDURE i_trg();
>
>> CREATE TRIGGER t000_set_id
>> -BEFORE INSERT OR DELETE OR UPDATE ON b
>> +BEFORE INSERT OR DELETE OR UPDATE ON a
>> FOR EACH ROW
>> EXECUTE PROCEDURE i_trg();
>
>> You see, object names are the same, but ordering is mixed.
>
> Yeah, because the sort is just on object name.
>
> For objects of the same type I suppose we could take a look at their
> owning object's name too ...
>
>			regards, tom lane
Re: [HACKERS] Toasted table not deleted when no out of line columns left
Hans-Jürgen Schönig <[EMAIL PROTECTED]> writes:
>> ... And implementing it would require introducing weird
>> corner cases into the tuple toaster, because it might now come across
>> TOAST pointers that point to a no-longer-existent table, and have to
>> consider that to be a no-op instead of an error condition.

> we will compile a patch within the next days to cover this case.

I'm not sure which part of "no" you didn't understand, but: I do not believe this is worth making the toast code less robust for.

			regards, tom lane
Re: [HACKERS] [patch] fix dblink security hole
Joe Conway <[EMAIL PROTECTED]> writes:
> If we push the responsibility back to dblink, we might as well export
> conninfo_parse() or some wrapper thereof and let dblink simply check for
> a non-null password from the very beginning.

That's not totally unreasonable, since we already export the PQconninfoOption struct ...

> Or perhaps we should modify conninfo_parse() to throw an error if it
> sees the same option more than once. Then dblink could prepend
> pgpassfile (or ignore_pgpass) to the beginning of the connstr and not
> have to worry about being overridden. Not sure if the backward
> compatibility hit is worth it though.

... but I think I like the second one better; multiple specifications of an option seem like probably a programming error anyway. It's a close call though. Exporting the parse code might enable other uses besides this one.

In either case we'd still need a check after connection to verify that the password actually got *used*, so I guess that PQconnectionUsedPassword isn't dead, just incomplete.

			regards, tom lane
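The duplicate-option idea discussed here — prepend a forced option and have the parser reject any second occurrence — can be sketched in Python (a hypothetical model, not libpq code; libpq's actual parser lets later occurrences silently override earlier ones, which is exactly the loophole being debated):

```python
def parse_conninfo_strict(conninfo):
    """Parse 'key=value' pairs, raising on a repeated keyword so that a
    caller-prepended option can never be overridden later in the string.
    (Toy parser: ignores quoting and other real conninfo syntax.)"""
    options = {}
    for token in conninfo.split():
        key, _, value = token.partition("=")
        if key in options:
            raise ValueError("duplicate connection option: %s" % key)
        options[key] = value
    return options
```

With this rule, dblink could prepend a forced setting (such as the hypothetical "ignore_pgpass=1" mentioned in the thread) to the user-supplied string and be certain the user cannot cancel it out further along.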
Re: [HACKERS] Foreign key constraint for array-field?
On Sun, 2008-09-21 at 15:07 -0400, Andrew Dunstan wrote:
> Simon Riggs wrote:
>> No, it's not possible. Need a trigger.
>>
>> I think we should support it though. If we extend the relational model
>> with arrays then it would be sensible if we support this aspect as
>> well.
>>
>> Implementation would be fairly straightforward. ri_triggers currently
>> assumes a non-array value is being checked, but that could be changed to
>> IN (array). Multi-column keys with arrays sound confusing though.
>
> What's the syntax going to look like?

The ALTER TABLE would have exactly the same syntax.

--
Simon Riggs
www.2ndQuadrant.com
PostgreSQL Training, Services and Support
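The element-wise check such a constraint (or today's trigger-based workaround) would have to perform looks like this in miniature (a hypothetical Python illustration of the semantics, not ri_triggers code):

```python
def array_fk_ok(array_value, referenced_keys):
    """An element-wise foreign key: every element of the array column
    must appear among the referenced table's key values (modeled here
    as a set).  An empty array is vacuously valid, like an empty row
    set under a conventional FK."""
    return all(elem in referenced_keys for elem in array_value)
```

On INSERT or UPDATE of the referencing row, each element is checked against the referenced keys; the harder half, which a real implementation must also cover, is blocking DELETEs on the referenced side while any array still mentions the key.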
Re: [HACKERS] Proposal: move column defaults into pg_attribute along with attacl
On Sun, Sep 21, 2008 at 11:09 AM, Tom Lane <[EMAIL PROTECTED]> wrote: > A possible objection to this plan is that if the column-level privileges > patch doesn't get in, then we're left with a useless column in > pg_attribute. But an always-null column doesn't cost much of anything, > and we know that sooner or later we will support per-column ACLs: > they are clearly useful as well as required by spec. So any > inefficiency would only be transient anyway. > > Thoughts, objections? Hrm, I thought if anything we wanted to put them in pg_constraints (at least inherited ones). Now maybe I have defaults confused with NOT NULLs... But don't we want to be able to give defaults names and such? -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] Predictable order of SQL commands in pg_dump
"Dmitry Koterov" <[EMAIL PROTECTED]> writes: > CREATE TRIGGER t000_set_id > -BEFORE INSERT OR DELETE OR UPDATE ON a > +BEFORE INSERT OR DELETE OR UPDATE ON b > FOR EACH ROW > EXECUTE PROCEDURE i_trg(); > CREATE TRIGGER t000_set_id > -BEFORE INSERT OR DELETE OR UPDATE ON b > +BEFORE INSERT OR DELETE OR UPDATE ON a > FOR EACH ROW > EXECUTE PROCEDURE i_trg(); > You see, object names are the same, but ordering is mixed. Yeah, because the sort is just on object name. For objects of the same type I suppose we could take a look at their owning object's name too ... regards, tom lane -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] Foreign key constraint for array-field?
> I strongly suspect you'd benefit a lot more by learning database best > practices rather than assuming, as you appear to be doing, that you > are dealing with a new field and that you know it best. Neither is true. Of course, you are absolutely right. I venerate you! O! :-) -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] parallel pg_restore
Andrew Dunstan <[EMAIL PROTECTED]> writes: > I am working on getting parallel pg_restore working. I'm currently > getting all the scaffolding working, and hope to have a naive prototype > posted within about a week. > The major question is how to choose the restoration order so as to > maximize efficiency both on the server and in reading the archive. One of the first software design principles I ever learned was to separate policy from mechanism. ISTM in this first cut you ought to concentrate on mechanism and let the policy just be something dumb (but coded separately from the infrastructure). We can refine it after that. > Another question is what we should do if the user supplies an explicit > order with --use-list. I'm inclined to say we should stick strictly with > the supplied order. Or maybe that should be an option. Hmm. I think --use-list is used more for selecting a subset of items to restore than for forcing a nondefault restore order. Forcing the order used to be a major purpose, but that was years ago before we had the dependency-driven-restore-order code working. So I'd vote that the default behavior is to still allow parallel restore when this option is used, and we should provide an orthogonal option that disables use of parallel restore. You'd really want the latter anyway for some cases, ie, when you don't want the restore trying to hog the machine. Maybe the right form for the extra option is just a limit on how many connections to use. Set it to one to force the exact restore order, and to other values to throttle how much of the machine the restore tries to eat. One problem here though is that you'd need to be sure you behave sanely when there is a dependency chain passing through an object that's not to be restored. The ordering of the rest of the chain still ought to honor the dependencies I think. 
regards, tom lane -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] [patch] fix dblink security hole
"Marko Kreen" <[EMAIL PROTECTED]> writes: > On 9/21/08, Joe Conway <[EMAIL PROTECTED]> wrote: >> Why? pg_service does not appear to support wildcards, so what is the attack >> vector? > "service=foo host=custom" The proposal to require a password = foo entry in the conn string seems to resolve all of these, without taking away useful capability. I don't think that forbidding use of services altogether is a good thing. So that seems to tilt the decision towards exposing the conninfo_parse function. Joe, do you want to have a go at it, or shall I? regards, tom lane -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] parallel pg_restore
Tom Lane wrote: Andrew Dunstan <[EMAIL PROTECTED]> writes: I am working on getting parallel pg_restore working. I'm currently getting all the scaffolding working, and hope to have a naive prototype posted within about a week. The major question is how to choose the restoration order so as to maximize efficiency both on the server and in reading the archive. One of the first software design principles I ever learned was to separate policy from mechanism. ISTM in this first cut you ought to concentrate on mechanism and let the policy just be something dumb (but coded separately from the infrastructure). We can refine it after that. Indeed, that's exactly what I'm doing. However, given that time for the 8.4 window is short, I thought it would be sensible to get people thinking about what the policy might be, while I get on with the mechanism. Another question is what we should do if the user supplies an explicit order with --use-list. I'm inclined to say we should stick strictly with the supplied order. Or maybe that should be an option. Hmm. I think --use-list is used more for selecting a subset of items to restore than for forcing a nondefault restore order. Forcing the order used to be a major purpose, but that was years ago before we had the dependency-driven-restore-order code working. So I'd vote that the default behavior is to still allow parallel restore when this option is used, and we should provide an orthogonal option that disables use of parallel restore. You'd really want the latter anyway for some cases, ie, when you don't want the restore trying to hog the machine. Maybe the right form for the extra option is just a limit on how many connections to use. Set it to one to force the exact restore order, and to other values to throttle how much of the machine the restore tries to eat. 
My intention is to have single-thread restore remain the default, at least for this go round, and have the user be able to choose --multi-thread=nn to specify the number of concurrent connections to use. One problem here though is that you'd need to be sure you behave sanely when there is a dependency chain passing through an object that's not to be restored. The ordering of the rest of the chain still ought to honor the dependencies I think. Right. I think we'd need to fake doing a full restore and omit actually restoring items not on the passed in list. That should be simple enough. cheers andrew -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] Proposal: move column defaults into pg_attribute along with attacl
"Alex Hunsaker" <[EMAIL PROTECTED]> writes: > Hrm, I thought if anything we wanted to put them in pg_constraints (at > least inherited ones). Now maybe I have defaults confused with NOT > NULLs... But don't we want to be able to give defaults names and and > such? No, I think you're thinking of NOT NULL. There's no reason to have names for defaults that I can see. regards, tom lane -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] [patch] fix dblink security hole
Tom Lane wrote: BTW, a possible hole in this scheme would be if a user could supply a conninfo string that was intentionally malformed in a way that would cause a tacked-on pgpassfile option to be ignored by libpq. We might need to add some validity checks to dblink, or tighten libpq's own checks. If we push the responsibility back to dblink, we might as well export conninfo_parse() or some wrapper thereof and let dblink simply check for a non-null password from the very beginning. Or perhaps we should modify conninfo_parse() to throw an error if it sees the same option more than once. Then dblink could prepend pgpassfile (or ignore_pgpass) to the beginning of the connstr and not have to worry about being overridden. Not sure if the backward compatibility hit is worth it though. Joe -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] [patch] fix dblink security hole
Tom Lane wrote: "Marko Kreen" <[EMAIL PROTECTED]> writes: On 9/21/08, Joe Conway <[EMAIL PROTECTED]> wrote: Why? pg_service does not appear to support wildcards, so what is the attack vector? "service=foo host=custom" The proposal to require a password = foo entry in the conn string seems to resolve all of these, without taking away useful capability. I don't think that forbidding use of services altogether is a good thing. So that seems to tilt the decision towards exposing the conninfo_parse function. Joe, do you want to have a go at it, or shall I? Agreed. I'll take a stab at it. Joe -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] Toasted table not deleted when no out of line columns left
*snip* Judging from that, the toasted table cleanup may be part of ALTER TABLE DROP COLUMN. That would only help if you were dropping the last potentially- toastable column of a table. And implementing it would require introducing weird corner cases into the tuple toaster, because it might now come across TOAST pointers that point to a no-longer-existent table, and have to consider that to be a no-op instead of an error condition. regards, tom lane tom, in our test case we had a table with 10 integer columns (nothing else) along with a 10 gb toast table - this is why we were a little surprised. in this case it can definitely be cleaned up. it is clear that we definitely don't want to change columns directly here when a column is dropped. - however, if there is not a single toastable column left, we should definitely clean up. we will compile a patch within the next days to cover this case. many thanks, hans -- Cybertec Schönig & Schönig GmbH Gröhrmühlgasse 26 A-2700 Wiener Neustadt Web: www.postgresql-support.de -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] Assert Levels
On Fri, 19 Sep 2008, Tom Lane wrote: Well, there are certain things that --enable-cassert turns on that are outrageously expensive...I don't think anyone knows what the performance impact of just the regular Asserts is; it's been too long since these other things were stuck in there. The next time I'm doing some performance testing I'll try to quantify how much damage the expensive ones do by playing with pg_config_manual.h. Normally I'm testing with 1GB+ of shared_buffers, which makes the current assert scheme unusable. If I could run with asserts on and only lose a small amount of performance just by disabling the clobber and context checking, that would be valuable to know. Right now I waste a fair amount of time running performance and assert builds in parallel. -- * Greg Smith [EMAIL PROTECTED] http://www.gregsmith.com Baltimore, MD -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] [patch] fix dblink security hole
On 9/21/08, Joe Conway <[EMAIL PROTECTED]> wrote: > Marko Kreen wrote: > > You need to ignore pg_service also. (And PGPASSWORD) > > Why? pg_service does not appear to support wildcards, so what is the attack > vector? "service=foo host=custom" > And on PGPASSWORD, the fine manual says the following: > > PGPASSWORD sets the password used if the server demands password > authentication. Use of this environment variable is not recommended > for security reasons (some operating systems allow non-root users to > see process environment variables via ps); instead consider using the > ~/.pgpass file (see Section 30.13). That does not mean it's OK to handle it insecurely. If you want to solve the immediate problem with a hack, then the cleanest hack would be a "no-external-sources-for-connection-details" hack. Leaving the less probable paths open is just a sloppy attitude. > At the moment the only real issue I can see is .pgpass when wildcards are > used for hostname:port:database. Well, the real issue is that lusers are allowed to freely launch connections; that's the source of all the other problems. -- marko -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] Assert Levels
Greg Smith <[EMAIL PROTECTED]> writes: > The next time I'm doing some performance testing I'll try to quantify how > much damage the expensive ones do by playing with pg_config_manual.h. > Normally I'm testing with 1GB+ of shared_buffers which makes the current > assert scheme unusable. There is a commit-time scan of the buffers for the sole purpose of asserting a few things about their state. That's one of the things we'd need to turn off for a "cheap asserts only" mode I think. If you want to try to quantify what "cheap asserts" might do, you should pull out the #ifdef USE_ASSERT_CHECKING code here: check_list_invariants in list.c the loops in AtEOXact_Buffers and AtEOXact_LocalBuffers the loop in AtEOXact_CatCache the test that makes AtEOXact_RelationCache scan the relcache even when !need_eoxact_work in addition to the memory-related stuff. regards, tom lane -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
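The split being discussed — keep the cheap per-call Asserts on, separately disable the whole-structure scans like the AtEOXact_Buffers loop — can be illustrated with a toy two-tier scheme. Python is used here purely for brevity; PostgreSQL's Assert is of course a C macro, and all names below are made up:

```python
CHEAP_ASSERTS = True       # regular Asserts: cheap, always worth keeping on
EXPENSIVE_ASSERTS = False  # O(shared_buffers)-style scans, off for perf runs

def cheap_assert(cond, msg="assertion failed"):
    if CHEAP_ASSERTS and not cond:
        raise AssertionError(msg)

def expensive_assert(check, msg="assertion failed"):
    # Taking a callable means the costly scan is never even evaluated
    # when the expensive tier is off, not merely its result discarded.
    if EXPENSIVE_ASSERTS and not check():
        raise AssertionError(msg)

buffer_refcounts = [0] * 1024  # stand-in for shared buffer pin counts

cheap_assert(len(buffer_refcounts) > 0)
expensive_assert(lambda: all(rc == 0 for rc in buffer_refcounts),
                 "buffer pin leaked at commit")
```

Greg's performance-testing scenario would then correspond to running with the cheap tier on and the expensive tier off, instead of maintaining separate assert and non-assert builds.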
[HACKERS] parallel pg_restore
I am working on getting parallel pg_restore working. I'm currently getting all the scaffolding working, and hope to have a naive prototype posted within about a week. The major question is how to choose the restoration order so as to maximize efficiency both on the server and in reading the archive. My thoughts are currently running something like this: * when an item is completed, reduce the dependency count for each item that depends on it by 1. * when an item has a dependency count of 0 it is available for execution, and gets moved to the head of the queue. * when a new worker spot becomes available, if there not currently a data load running then pick the first available data load, otherwise pick the first available item. This would mean that loading a table would probably be immediately followed by creation of its indexes, including PK and UNIQUE constraints, thus taking possible advantage of synchronised scans, data in file system buffers, etc. Another question is what we should do if the user supplies an explicit order with --use-list. I'm inclined to say we should stick strictly with the supplied order. Or maybe that should be an option. Thoughts and comments welcome. cheers andrew -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
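The scheduling rules above can be sketched as a dependency-counting loop. This is a single-worker simulation in Python (pg_restore itself is C, and the data shapes here are invented), showing the decrement-on-completion bookkeeping and the prefer-a-data-load policy:

```python
from collections import deque

def restore_order(items, deps):
    """items maps name -> kind ('schema', 'data', 'index', ...);
    deps maps name -> set of names it depends on.  Returns one order
    satisfying the rules above: an item becomes ready when its
    dependency count hits zero, and a ready data load is preferred
    when picking the next item to run."""
    remaining = {name: len(deps.get(name, ())) for name in items}
    dependents = {name: [] for name in items}
    for name, ds in deps.items():
        for d in ds:
            dependents[d].append(name)

    ready = deque(n for n, c in remaining.items() if c == 0)
    order = []
    while ready:
        # Policy: take the first ready data load if any, else FIFO.
        pick = next((n for n in ready if items[n] == "data"), ready[0])
        ready.remove(pick)
        order.append(pick)
        for dep in dependents[pick]:  # item done: release its dependents
            remaining[dep] -= 1
            if remaining[dep] == 0:
                ready.append(dep)
    return order
```

With this shape, a table's data load is scheduled as soon as the table exists and its indexes immediately after the data, which is the synchronized-scan/file-system-buffer locality the proposal hopes to exploit.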
Re: [HACKERS] Predictable order of SQL commands in pg_dump
Unfortunately, I cannot reproduce this reliably. But from time to time I run the diff utility on a database dump and notice that two or more trigger or constraint definitions (or something else) are permuted. Something like this: +ALTER TABLE ONLY a +ADD CONSTRAINT "fk_b_Id" FOREIGN KEY (b_id) REFERENCES b(id) MATCH FULL; -ALTER TABLE ONLY a -ADD CONSTRAINT fk_b_id FOREIGN KEY (b_id) REFERENCES b(id) MATCH FULL; -ALTER TABLE ONLY a +ALTER TABLE ONLY a ADD CONSTRAINT fk_c_id FOREIGN KEY (c_id) REFERENCES c(id) MATCH FULL; Or this: CREATE TRIGGER t000_set_id -BEFORE INSERT OR DELETE OR UPDATE ON a +BEFORE INSERT OR DELETE OR UPDATE ON b FOR EACH ROW EXECUTE PROCEDURE i_trg(); CREATE TRIGGER t000_set_id -BEFORE INSERT OR DELETE OR UPDATE ON b +BEFORE INSERT OR DELETE OR UPDATE ON a FOR EACH ROW EXECUTE PROCEDURE i_trg(); You see, the object names are the same, but the ordering is mixed. It seems pg_dump orders objects with no care about their dependencies? So, if object names are the same, it dumps them in an unpredictable order, no matter their contents... On Sun, Sep 21, 2008 at 5:28 AM, Tom Lane <[EMAIL PROTECTED]> wrote: > "Dmitry Koterov" <[EMAIL PROTECTED]> writes: >> Utility pg_dump dumps the identical database schemas not always >> identically: sometimes it changes an order of SQL statements. > > Please provide a concrete example. The dump order for modern servers > (ie, since 7.3) is by object type, and within a type by object name, > except where another order is forced by dependencies. And there is no > random component to the dependency solver ;-). So it should be > behaving the way you want. > >regards, tom lane > -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] Foreign key constraint for array-field?
On Sun, Sep 21, 2008 at 10:49:56PM +0400, Dmitry Koterov wrote: > Normalization is not a panacea here. Sometimes such normalization > creates too much overhead and a lot of additional code (especially if > there are a lot of such dependencies). Array support in Postgres is > quite handy; in my practice, moving from a_b_map to arrays > economizes hundreds of lines of stored procedure and calling > application code. There are plenty of ways to "economize," as you put it. The burden is on you to demonstrate that you are doing the right thing here because standard database practice hammered out over decades is to normalize. It's possible to make writeable VIEWs that accomplish what you appear to want, but there's no reason to go further than that on the PostgreSQL side. :) > Triggers are not very helpful here, because it is too tedious to > ensure that all needed tables have appropriate triggers (we need N + > 1 triggers with unique code, where N is the number of referring > tables). > > So, built-in support looks much more interesting... I strongly suspect you'd benefit a lot more by learning database best practices rather than assuming, as you appear to be doing, that you are dealing with a new field and that you know it best. Neither is true. Cheers, David. -- David Fetter <[EMAIL PROTECTED]> http://fetter.org/ Phone: +1 415 235 3778 AIM: dfetter666 Yahoo!: dfetter Skype: davidfetter XMPP: [EMAIL PROTECTED] Remember to vote! Consider donating to Postgres: http://www.postgresql.org/about/donate -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] Foreign key constraint for array-field?
Simon Riggs wrote: No, its not possible. Need a trigger. I think we should support it though. If we extend the relational model with arrays then it would be sensible if we support this aspect as well. Implementation would be fairly straightforward. ri_triggers currently assumes a non-array value is being checked, but that could be changed to IN(array). Multi-column keys with arrays sound confusing though. What's the syntax going to look like? cheers andrew -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] Foreign key constraint for array-field?
Normalization is not a panacea here. Sometimes such normalization creates too much overhead and a lot of additional code (especially if there are a lot of such dependencies). Array support in Postgres is quite handy; in my practice, moving from a_b_map to arrays economizes hundreds of lines of stored procedure and calling application code. Triggers are not very helpful here, because it is too tedious to ensure that all needed tables have appropriate triggers (we need N + 1 triggers with unique code, where N is the number of referring tables). So, built-in support looks much more interesting... On Sun, Sep 21, 2008 at 8:46 AM, Joshua D. Drake <[EMAIL PROTECTED]> wrote: > David Fetter wrote: >> >> On Sun, Sep 21, 2008 at 04:38:56AM +0400, Dmitry Koterov wrote: >>> >>> Hello. >>> >>> Is it possible to create a foreign key constraint for ALL elements of >>> an array field? >> >> Whether it's possible or not--it probably is--it's a very bad idea. >> Just normalize :) > > +1 > >> >> Cheers, >> David. > > > -- > Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) > To make changes to your subscription: > http://www.postgresql.org/mailpref/pgsql-hackers > -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] pg_settings.sourcefile patch is a security breach
Tom Lane wrote: > We go to some lengths to prevent non-superusers from examining > data_directory and other values that would tell them exactly where the > PG data directory is in the server's filesystem. The recently applied > patch to expose full pathnames of GUC variables' source files blows a > hole a mile wide in that. > > Possible answers: don't show the path, only the file name; or > show sourcefile/sourceline as NULL to non-superusers. My vote goes for showing it as NULL to non-superusers. If we remove the path, that makes it pretty darn useless for admin tools - which was the main reason it was added in the first place.. And "showing full path for superuser, just filename for non-superusers" just seems to be way too ugly to consider :-) //Magnus -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
[HACKERS] pg_settings.sourcefile patch is a security breach
We go to some lengths to prevent non-superusers from examining data_directory and other values that would tell them exactly where the PG data directory is in the server's filesystem. The recently applied patch to expose full pathnames of GUC variables' source files blows a hole a mile wide in that. Possible answers: don't show the path, only the file name; or show sourcefile/sourceline as NULL to non-superusers. regards, tom lane -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] Proposal: move column defaults into pg_attribute along with attacl
[EMAIL PROTECTED] writes: > pgadmin has some umm, interesting queries over pg_depends. It sounds > like this change could complicate those. I doubt it's an > insurmountable problem of course. Yeah. But the only real point of the change is cleanliness, and if it's injecting ugliness into clients then that argument loses a lot of its force. I looked at pg_dump a bit and it definitely would get uglier --- not really more complicated, but defaults would get handled a bit differently from everything else. Seems like a fertile source of bugs. So maybe we'd better forget the idea :-( regards, tom lane -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] Proposal: move column defaults into pg_attribute along with attacl
pgadmin has some umm, interesting queries over pg_depends. It sounds like this change could complicate those. I doubt it's an insurmountable problem of course. On 9/21/08, Tom Lane <[EMAIL PROTECTED]> wrote: > "Joshua D. Drake" <[EMAIL PROTECTED]> writes: >> Tom Lane wrote: >>> A possible objection to this plan is that if the column-level privileges >>> patch doesn't get in, then we're left with a useless column in >>> pg_attribute. But an always-null column doesn't cost much of anything, >>> and we know that sooner or later we will support per-column ACLs: >>> they are clearly useful as well as required by spec. So any >>> inefficiency would only be transient anyway. > >> Right. I don't see this objection holding much water as column privs are >> something that many in the community would like to see. If Stephen's >> patch doesn't get in, it is likely it would (or a derivative there of) >> within the 8.5 release cycle. If anything it just provides a stepping >> stone. I see nothing wrong with that. > > Yah. However, I started to look at doing this and immediately hit a > stumbling block: we need a representation in pg_depend for a column's > default expression (as distinct from the column itself). Currently > this consists of classid = OID of pg_attrdef, objid = OID of the > default's row in pg_attrdef; both of which would disappear if we > get rid of pg_attrdef as an actual catalog. > > I can think of a way around that: represent a default expression using > classid = OID of pg_attribute, objid = OID of table, objsubid = column > attnum. This is distinct from the column itself, which is represented > with classid = OID of pg_class. It seems pretty ugly and potentially > confusing though. Also there would be a compatibility issue for clients > that examine pg_depend. Is it ugly enough to scuttle the whole concept > of merging pg_attrdef into pg_attribute? 
> > regards, tom lane > > -- > Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) > To make changes to your subscription: > http://www.postgresql.org/mailpref/pgsql-hackers > -- Dave Page EnterpriseDB UK: http://www.enterprisedb.com -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] Proposal: move column defaults into pg_attribute along with attacl
"Joshua D. Drake" <[EMAIL PROTECTED]> writes: > Tom Lane wrote: >> A possible objection to this plan is that if the column-level privileges >> patch doesn't get in, then we're left with a useless column in >> pg_attribute. But an always-null column doesn't cost much of anything, >> and we know that sooner or later we will support per-column ACLs: >> they are clearly useful as well as required by spec. So any >> inefficiency would only be transient anyway. > Right. I don't see this objection holding much water as column privs are > something that many in the community would like to see. If Stephen's > patch doesn't get in, it is likely it would (or a derivative there of) > within the 8.5 release cycle. If anything it just provides a stepping > stone. I see nothing wrong with that. Yah. However, I started to look at doing this and immediately hit a stumbling block: we need a representation in pg_depend for a column's default expression (as distinct from the column itself). Currently this consists of classid = OID of pg_attrdef, objid = OID of the default's row in pg_attrdef; both of which would disappear if we get rid of pg_attrdef as an actual catalog. I can think of a way around that: represent a default expression using classid = OID of pg_attribute, objid = OID of table, objsubid = column attnum. This is distinct from the column itself, which is represented with classid = OID of pg_class. It seems pretty ugly and potentially confusing though. Also there would be a compatibility issue for clients that examine pg_depend. Is it ugly enough to scuttle the whole concept of merging pg_attrdef into pg_attribute? regards, tom lane -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] [patch] fix dblink security hole
Marko Kreen wrote: On 9/21/08, Tom Lane <[EMAIL PROTECTED]> wrote: Joe Conway <[EMAIL PROTECTED]> writes: Good point -- I'll look into that and post something tomorrow. How does "requirepassword" sound for the option? It is consistent with "requiressl" but a bit long and hard to read. Maybe "require_password"? Well, no, because it's not requiring a password. Perhaps "ignore_pgpass"? You need to ignore pg_service also. (And PGPASSWORD) Why? pg_service does not appear to support wildcards, so what is the attack vector? And on PGPASSWORD, the fine manual says the following: PGPASSWORD sets the password used if the server demands password authentication. Use of this environment variable is not recommended for security reasons (some operating systems allow non-root users to see process environment variables via ps); instead consider using the ~/.pgpass file (see Section 30.13). At the moment the only real issue I can see is .pgpass when wildcards are used for hostname:port:database. Joe -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] Proposal: move column defaults into pg_attribute along with attacl
Tom Lane wrote: A possible objection to this plan is that if the column-level privileges patch doesn't get in, then we're left with a useless column in pg_attribute. But an always-null column doesn't cost much of anything, and we know that sooner or later we will support per-column ACLs: they are clearly useful as well as required by spec. So any inefficiency would only be transient anyway. Right. I don't see this objection holding much water as column privs are something that many in the community would like to see. If Stephen's patch doesn't get in, it is likely it would (or a derivative there of) within the 8.5 release cycle. If anything it just provides a stepping stone. I see nothing wrong with that. Thoughts, objections? +1 Sincerely, Joshua D. Drake -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] Assert Levels
Peter Eisentraut <[EMAIL PROTECTED]> writes: > Simon Riggs wrote: >> Well, we don't. That's why I'd suggest to do it slowly and classify >> everything as medium weight until proven otherwise. > Once you have classified all asserts, what do we do with the result? > What would be the practical impact? What would be your recommendation > about who runs with what setting? Being able to keep asserts on while doing performance stress testing was the suggested use case. I think we'd still recommend having them off in production. FWIW, my gut feeling about it is that 99% of the asserts in the backend are lightweight, ie, have no meaningful effect on performance. There are a small number that are expensive (the tests on validity of List structures come to mind, as well as what we already discussed). I don't have any more evidence for this than Simon has for his "they're mostly medium-weight" assumption, but I'd point out that by definition most of the backend code isn't performance-critical. So I think that an option to turn off a few particularly expensive asserts would be sufficient. Moreover, the more asserts you turn off, the less useful it would be to do testing of this type. I see essentially no value in a mode that turns off the majority of assertions. regards, tom lane -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
[HACKERS] Proposal: move column defaults into pg_attribute along with attacl
I had a thought while looking over the column-level privileges patch that Stephen Frost is working on. To wit, that the only reason that column default expressions are stored in a separate catalog pg_attrdef is the historical assumption in some parts of the code that pg_attribute rows are fixed-width and all-not-null. This assumption is destroyed by adding an attacl column, and so the patch has already done the legwork to get rid of the assumption.

Given that, it would be a lot cleaner and more efficient to get rid of pg_attrdef and store column default expressions in a new, possibly-null column pg_attribute.attdefault. For backwards compatibility for clients, we could create a system view replacing pg_attrdef, but the backend wouldn't use it any more. Also, although the atthasdef column is redundant with checking for non-null attdefault, we should keep it: not only for compatibility, but because it would be accessible in the fixed-width subset of pg_attribute rows that will be kept in tuple descriptors, and so it could save a syscache lookup in some places.

If that idea seems sane to people, what I would like to do is grab the parts of the column-level privileges patch that relate to allowing pg_attribute to contain null columns, and apply a patch that gets rid of pg_attrdef and adds two columns attdefault and attacl to pg_attribute. For the moment attacl would remain unused and always null. This would eliminate a fair amount of the nuisance diffs that Stephen is currently carrying and allow him to focus on the mechanics of doing something useful with per-column ACLs. Adding both columns at the same time should eliminate most of the merge problem he'd otherwise have from needing to touch pg_attribute.h and pg_class.h again.

A possible objection to this plan is that if the column-level privileges patch doesn't get in, then we're left with a useless column in pg_attribute. But an always-null column doesn't cost much of anything, and we know that sooner or later we will support per-column ACLs: they are clearly useful as well as required by spec. So any inefficiency would only be transient anyway.

Thoughts, objections?

regards, tom lane
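[Editor's note: to make the proposal concrete at the SQL level, today a client must join against pg_attrdef to see a default expression, whereas under the proposed layout the same information would come straight out of pg_attribute. A sketch only: the attdefault column is the hypothetical part, and "mytable" is an illustrative name.]

```sql
-- Today: default expressions live in the separate pg_attrdef catalog.
SELECT a.attname, pg_get_expr(d.adbin, d.adrelid) AS default_expr
FROM pg_attribute a
JOIN pg_attrdef d ON d.adrelid = a.attrelid AND d.adnum = a.attnum
WHERE a.attrelid = 'mytable'::regclass;

-- Under the proposal (hypothetical): the expression is stored directly in
-- a nullable pg_attribute.attdefault column; atthasdef is kept so the
-- fixed-width part of the row can answer "is there a default?" cheaply.
SELECT attname, pg_get_expr(attdefault, attrelid) AS default_expr
FROM pg_attribute
WHERE attrelid = 'mytable'::regclass AND atthasdef;
```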
Re: [HACKERS] Assert Levels
Simon Riggs wrote:
> Well, we don't. That's why I'd suggest to do it slowly and classify
> everything as medium weight until proven otherwise.

Once you have classified all asserts, what do we do with the result? What would be the practical impact? What would be your recommendation about who runs with what setting?
Re: [HACKERS] Toasted table not deleted when no out of line columns left
Zoltan Boszormenyi <[EMAIL PROTECTED]> writes:
> we came across a database where a table had a TOAST table keeping huge
> amounts of disk space allocated. However, the table's current definition
> didn't explain why there was a TOAST table. Then upon some experiments,
> it struck me. There _was_ a toasted field, but as the schema was
> modified, the field was dropped, leaving only inline-stored fields.
> VACUUM [FULL] [ANALYZE] didn't clean up the space that was used by the
> TOAST table. My tests were done on 8.3.3.

This is not a bug; it is operating as designed. Observe the statement in the NOTES section of the ALTER TABLE page:

    The DROP COLUMN form does not physically remove the column, but simply
    makes it invisible to SQL operations. Subsequent insert and update
    operations in the table will store a null value for the column. Thus,
    dropping a column is quick but it will not immediately reduce the
    on-disk size of your table, as the space occupied by the dropped
    column is not reclaimed. The space will be reclaimed over time as
    existing rows are updated.

... and it goes on to point out how to force immediate space reclamation if you need that. These statements apply independently of whether any particular value is toasted or not. The reason for this choice is that reclaiming the space immediately would turn DROP COLUMN from a quick operation into a slow one, as it would have to grovel over every row of the table looking for TOAST pointers.

> Judging from that, the TOAST table cleanup may be part of
> ALTER TABLE DROP COLUMN.

That would only help if you were dropping the last potentially-toastable column of a table. And implementing it would require introducing weird corner cases into the tuple toaster, because it might now come across TOAST pointers that point to a no-longer-existent table, and have to consider that to be a no-op instead of an error condition.

regards, tom lane
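[Editor's note: the immediate-reclamation technique the ALTER TABLE notes allude to is forcing a rewrite of every row. A minimal sketch, assuming a table named "mytable" with a surviving column "kept_col"; any operation that rewrites all live rows works.]

```sql
-- Rewrite every live row so the dropped column's contents (including any
-- TOAST pointers) are replaced by nulls in the new tuple versions ...
UPDATE mytable SET kept_col = kept_col;

-- ... then reclaim the dead space in the table and its TOAST table.
VACUUM FULL mytable;
```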
[HACKERS] Toasted table not deleted when no out of line columns left
Hi,

we came across a database where a table had a TOAST table keeping huge amounts of disk space allocated. However, the table's current definition didn't explain why there was a TOAST table. Then upon some experiments, it struck me. There _was_ a toasted field, but as the schema was modified, the field was dropped, leaving only inline-stored fields. VACUUM [FULL] [ANALYZE] didn't clean up the space that was used by the TOAST table. My tests were done on 8.3.3.

As every statement that references a table puts a lock on the pg_class record, ALTER TABLE cannot progress until all locks are gone, i.e. until the transactions referencing the table have finished. It's true vice versa as well: ALTER TABLE blocks every transaction that may reference the table. Judging from that, the TOAST table cleanup may be part of ALTER TABLE DROP COLUMN.

Best regards,
Zoltán Böszörményi

--
Zoltán Böszörményi
Cybertec Schönig & Schönig GmbH
http://www.postgresql.at/
Re: [HACKERS] Foreign key constraint for array-field?
On Sun, 2008-09-21 at 04:38 +0400, Dmitry Koterov wrote:
> Is it possible to create a foreign key constraint for ALL elements of
> an array field?
>
> CREATE TABLE a(id INTEGER);
> CREATE TABLE b(id INTEGER, a_ids INTEGER[]);
>
> Field b.a_ids contains a list of ID's of "a" table. I want to ensure
> that each element in b.a_ids exists in a at any time. Is it possible
> to create an automatic foreign key?

No, it's not possible; you need a trigger.

I think we should support it, though. If we extend the relational model with arrays, then it would be sensible to support this aspect as well. Implementation would be fairly straightforward: ri_triggers currently assumes a non-array value is being checked, but that could be changed to IN (array). Multi-column keys with arrays sound confusing, though.

--
Simon Riggs
www.2ndQuadrant.com
PostgreSQL Training, Services and Support
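[Editor's note: a trigger-based workaround along the lines Simon mentions might look like the sketch below. It is illustrative only: it relies on 8.4's unnest(), and it only checks inserts and updates on "b"; it does not guard the other direction, i.e. deletes from "a".]

```sql
CREATE FUNCTION b_a_ids_check() RETURNS trigger AS $$
BEGIN
    -- Reject the row if any element of NEW.a_ids has no match in a.id.
    IF EXISTS (
        SELECT 1
        FROM unnest(NEW.a_ids) AS u(id)
        LEFT JOIN a ON a.id = u.id
        WHERE a.id IS NULL
    ) THEN
        RAISE EXCEPTION 'b.a_ids contains a value not present in a.id';
    END IF;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER b_a_ids_fk
    BEFORE INSERT OR UPDATE ON b
    FOR EACH ROW EXECUTE PROCEDURE b_a_ids_check();
```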
Re: [HACKERS] [patch] fix dblink security hole
On 9/21/08, Tom Lane <[EMAIL PROTECTED]> wrote:
> Joe Conway <[EMAIL PROTECTED]> writes:
> > Good point -- I'll look into that and post something tomorrow. How does
> > "requirepassword" sound for the option? It is consistent with
> > "requiressl" but a bit long and hard to read. Maybe "require_password"?
>
> Well, no, because it's not requiring a password.
>
> Perhaps "ignore_pgpass"?

You need to ignore pg_service also. (And PGPASSWORD.)

--
marko