Re: [HACKERS] Catalog Security WAS: Views, views, views: Summary
Tom mentioned that he had not had these security concerns raised before. From my point of view, I just have no idea about the level of information offered to any given user, and am scared to run PostgreSQL in an ISP shared environment because of it. I am sure I can secure people from connecting to a db by refusing them access in pg_hba.conf, but I'm unsure of exactly what that buys me, and what it doesn't.

It's certainly also a concern of mine that any given user can see every table in the database. I see that as a definite problem, and I just assumed it was already on the radar and something that was planned to be fixed. It astounds me that the claim is that such security is impossible.

It bothers me a great deal that I can't control very easily what a given user can see when they connect over ODBC or via phppgadmin in terms of schemas, tables and columns. I fixed this in application code in phppgadmin, but that's clearly insufficient since it doesn't do anything for the other access methods. Modifying phpPgAdmin is useless - people can query the catalogs manually.

Hackers - we get an email about information hiding in shared postgresql/phppgadmin installations at least once a fortnight :)

Chris

---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]
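To make the concern concrete: a minimal sketch of the sort of query any authenticated user can run against the system catalogs today, regardless of what has (or hasn't) been GRANTed on the tables themselves:

```sql
-- Any connected user can enumerate every ordinary table in the
-- database this way; table-level privileges do not hide catalog rows.
SELECT n.nspname AS schemaname, c.relname AS tablename
FROM pg_catalog.pg_class c
JOIN pg_catalog.pg_namespace n ON n.oid = c.relnamespace
WHERE c.relkind = 'r';
```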
Re: [HACKERS] Catalog Security WAS: Views, views, views: Summary
* Christopher Kings-Lynne ([EMAIL PROTECTED]) wrote:
> It bothers me a great deal that I can't control very easily what a given user can see when they connect over ODBC or via phppgadmin in terms of schemas, tables and columns. I fixed this in application code in phppgadmin but that's clearly insufficient since it doesn't do anything for the other access methods. Modifying phpPgAdmin is useless - people can query the catalogs manually.

It's not entirely *useless*; it's just not a proper fix for the security issue, I'll grant you that. Personally, I found the hack that I did pretty useful, since most of my users aren't likely to go sniffing through the catalog, and it was a temporary workaround for the complaints until there's a proper fix.

> Hackers - we get an email about information hiding in shared postgresql/phppgadmin installations at least once a fortnight :)

I agree with this - it needs to be dealt with and fixed, once and for all.

	Thanks,

		Stephen
Re: [HACKERS] Catalog Security WAS: Views, views, views: Summary
On Sat, May 14, 2005 at 08:55:17AM -0400, Stephen Frost wrote:
> * Christopher Kings-Lynne ([EMAIL PROTECTED]) wrote:
> > It bothers me a great deal that I can't control very easily what a given user can see when they connect over ODBC or via phppgadmin in terms of schemas, tables and columns. I fixed this in application code in phppgadmin but that's clearly insufficient since it doesn't do anything for the other access methods.
> >
> > Hackers - we get an email about information hiding in shared postgresql/phppgadmin installations at least once a fortnight :)
>
> I agree with this- it needs to be dealt with and fixed already, once and for all.

Given that the newsysviews all base visibility on granted permissions, would they do the job for you?

-- 
Jim C. Nasby, Database Consultant  [EMAIL PROTECTED]
Give your computer some brain candy! www.distributed.net Team #1828

Windows: Where do you want to go today?
Linux: Where do you want to go tomorrow?
FreeBSD: Are you guys coming, or what?
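For illustration, a hedged sketch of the permission-based visibility idea (this is not the newsysviews code itself, just the shape of it): a view that exposes only the tables the current user actually holds a privilege on, using the standard has_table_privilege() function.

```sql
-- Hypothetical sketch, not the actual newsysviews implementation:
-- show only the ordinary tables the current user can SELECT from.
CREATE VIEW visible_tables AS
SELECT n.nspname AS schemaname, c.relname AS tablename
FROM pg_catalog.pg_class c
JOIN pg_catalog.pg_namespace n ON n.oid = c.relnamespace
WHERE c.relkind = 'r'
  AND has_table_privilege(c.oid, 'SELECT');
```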
Re: [HACKERS] Catalog Security WAS: Views, views, views: Summary
* Jim C. Nasby ([EMAIL PROTECTED]) wrote:
> On Sat, May 14, 2005 at 08:55:17AM -0400, Stephen Frost wrote:
> > * Christopher Kings-Lynne ([EMAIL PROTECTED]) wrote:
> > > Hackers - we get an email about information hiding in shared postgresql/phppgadmin installations at least once a fortnight :)
> >
> > I agree with this- it needs to be dealt with and fixed already, once and for all.
>
> Given that the newsysviews all base visibility on granted permissions, would they do the job for you?

From what I've seen of them, yes, I believe they're exactly what I'm looking for. Of course, I'd really like to have them in core and have client applications updated to use them (assuming they need to be changed, which I'm guessing they would), etc.

Unfortunately, it's a bit too late to change what I'm about to put into production over to the newsysviews (I'm not 100% sure they're entirely ready yet, either), but I'll set them up on some of my development machines and play around with them some more. Here's to hoping they're in 8.1...

	Thanks,

		Stephen
Re: [HACKERS] Server instrumentation for 8.1
On Thu, May 12, 2005 at 10:39:14AM -0400, Tom Lane wrote:
> Magnus Hagander [EMAIL PROTECTED] writes:
> > Another thought I had along that line was to use a different signal to simply do a query cancel and set a global flag that is more or less "get out when you're done with query cancel". Then, if that flag is set, just close the connection and proceed as if the client dropped the connection - that has to be a well-tested codepath.
>
> This is pretty much exactly what kill -TERM does today, and the point is that the code path has only been extensively tested in the context of database-wide shutdown. No one can honestly say that they are sure there are no resource leaks, locks left unreleased, stuff like that. That kind of problem wouldn't be visible after a shutdown, but it will become visible if backends are killed individually with -TERM. Now in theory there are no bugs and this'll work fine. What disturbs me is the lack of testing by anyone who knows what to look for ...

Would a script/program that starts connections, runs a query, and then kills the backend repeatedly suffice?

-- 
Jim C. Nasby, Database Consultant  [EMAIL PROTECTED]
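Such a script might look roughly like the following - a hedged sketch only, to be run against a scratch cluster, never production. The database name, the slow query, and the 8.0-era pg_stat_activity column names (procpid, current_query) are assumptions here; the interesting part is what you inspect afterwards.

```shell
#!/bin/sh
# Rough sketch: repeatedly start a backend on a slow query, then
# kill that backend individually with SIGTERM. Scratch cluster only.
DB=regression
i=0
while [ $i -lt 100 ]; do
    # a deliberately slow query, run in the background
    psql -d "$DB" -c "SELECT count(*) FROM generate_series(1, 10000000);" &
    sleep 1
    # find that backend's PID via pg_stat_activity and TERM it
    PID=`psql -At -d "$DB" -c "SELECT procpid FROM pg_stat_activity \
         WHERE current_query LIKE '%generate_series%' LIMIT 1"`
    [ -n "$PID" ] && kill -TERM "$PID"
    wait
    i=`expr $i + 1`
done
# afterwards: check pg_locks for stray locks and the server log for
# resource-leak complaints - the things Tom says need eyeballing.
```

The killing itself is the easy half; the test is only as good as the post-mortem inspection of locks, memory, and temp files, which needs someone who knows what to look for.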
Re: [HACKERS] Catalog Security WAS: Views, views, views: Summary
On Sat, May 14, 2005 at 10:00:09AM -0400, Stephen Frost wrote:
> * Jim C. Nasby ([EMAIL PROTECTED]) wrote:
> > Given that the newsysviews all base visibility on granted permissions, would they do the job for you?
>
> From what I've seen of them, yes, I believe they're exactly what I'm looking for. Of course, I'd really like to have them in core and have

As would I.

> client applications updated to use them (assuming they need to be changed, which I'm guessing they would), etc. Unfortunately it's a bit too late to change what I'm about to put into production to the newsysviews (not 100% sure they're entirely ready yet either) but I'll set them up on some of my development machines and play around with them some more. Here's to hoping they're in 8.1...

Your feedback would be most welcome.

-- 
Jim C. Nasby, Database Consultant  [EMAIL PROTECTED]
Re: [HACKERS] Server instrumentation for 8.1
Jim C. Nasby wrote:
> On Thu, May 12, 2005 at 10:39:14AM -0400, Tom Lane wrote:
> > This is pretty much exactly what kill -TERM does today, and the point is that the code path has only been extensively tested in the context of database-wide shutdown. No one can honestly say that they are sure there are no resource leaks, locks left unreleased, stuff like that. That kind of problem wouldn't be visible after a shutdown, but it will become visible if backends are killed individually with -TERM. Now in theory there are no bugs and this'll work fine. What disturbs me is the lack of testing by anyone who knows what to look for ...
>
> Would a script/program that starts connections, runs a query, and then kills the backend repeatedly suffice?

Incidentally, if there are serious worries about it, testing would be a *really* good thing ... it's more or less officially sanctioned, since TERM is on the list of signals supported by pg_ctl's kill mode.

cheers

andrew
Re: [HACKERS] Server instrumentation for 8.1
Tom Lane wrote:
> Magnus Hagander [EMAIL PROTECTED] writes:
> > Another thought I had along that line was to use a different signal to simply do a query cancel and set a global flag that is more or less "get out when you're done with query cancel". Then, if that flag is set, just close the connection and proceed as if the client dropped the connection - that has to be a well-tested codepath.
>
> This is pretty much exactly what kill -TERM does today, and the point is that the code path has only been extensively tested in the context of database-wide shutdown. No one can honestly say that they are sure there are no resource leaks, locks left unreleased, stuff like that. That kind of problem wouldn't be visible after a shutdown, but it will become visible if backends are killed individually with -TERM. Now in theory there are no bugs and this'll work fine. What disturbs me is the lack of testing by anyone who knows what to look for ...

Right now, the way we do cancel is to catch a signal from the postmaster, set a flag, then check it later at a safe point to decide if we should cancel the query. It seems any code that would allow backends to exit is going to have to use the same logic for safety; I don't see how stress testing is ever going to be sure to catch all problems.

Can't we have a signal that does a query cancel, does the normal cancel cleanup, and then exits rather than asking for another query? Is that what is already being talked about?

-- 
  Bruce Momjian                        |  http://candle.pha.pa.us
  pgman@candle.pha.pa.us               |  (610) 359-1001
  +  If your life is a hard drive,     |  13 Roberts Road
  +  Christ can be your backup.        |  Newtown Square, Pennsylvania 19073
Re: [HACKERS] Server instrumentation for 8.1
Bruce Momjian pgman@candle.pha.pa.us writes:
> Can't we have a signal that does a query cancel, does the normal cancel cleanup, then exits rather than asking for another query?

That *is* what we have.

I give up trying to explain myself, since it's obvious that I'm not getting through to anyone. Commit the darn thing. I take no responsibility for it and will not investigate any problems.

			regards, tom lane
Re: [HACKERS] Best way to scan on-disk bitmaps
Tom Lane wrote:
> Victor Y. Yegorov [EMAIL PROTECTED] writes:
> > If I have an on-disk bitmap ON (a, b, c), will the planner pick an index scan for WHERE a = 42 AND b = 'foo' (i.e. only part of the index attributes are involved)? Any modifications needed to achieve this functionality?
>
> Hmm. That particular case will work, but the planner believes that only consecutive columns in the index are usable --- that is, if you have quals for a and c but not for b, it will think that the condition for c isn't usable with the index. This is true for btree and gist indexes, so I suppose we'd need to introduce a pg_am column that tells what to do.

We do have a TODO for this:

* Use index to restrict rows returned by multi-key index when used with non-consecutive keys to reduce heap accesses

  For an index on col1,col2,col3, and a WHERE clause of col1 = 5 and col3 = 9, spin through the index checking for col1 and col3 matches, rather than just col1; also called skip-scanning.

-- 
  Bruce Momjian  |  http://candle.pha.pa.us
[HACKERS] alternate regression dbs?
I meant to mention this during previous discussion. Currently, the pg_regress script does dbname=regression and then does everything in terms of $dbname. Would there be any value in providing a --dbname=foo parameter, so that different regression sets could use their own db?

One virtue, at least, might be that we would not drop the core regression db all the time - having it around can be useful, I think.

cheers

andrew
Re: [HACKERS] alternate regression dbs?
Andrew Dunstan [EMAIL PROTECTED] writes:
> Currently the pg_regress script does dbname=regression and then does everything in terms of $dbname. Would there be any value in providing a --dbname=foo parameter so that different regression sets could use their own db? One virtue at least might be that we would not drop the core regression db all the time - having it around can be useful, I think.

I'd be in favor of using three such DBs: one each for core, the PLs, and contrib. (More than that seems like it would clutter the disk a lot.) But I do use the standard regression DB as a handy testbed for a lot of stuff, and it has bothered me in the past that the contrib installcheck wipes it out.

Another point in the same general area: it would probably not be hard to support "make check" as well as "make installcheck" for the PLs. (The reason it's hard for contrib is that "make install" doesn't install contrib ... but it does install the PLs.) Is it worth doing, though? The easy implementation would require building a temp install tree for each PL, which seems mighty slow and disk-space-hungry.

			regards, tom lane
Re: [HACKERS] alternate regression dbs?
Tom Lane wrote:
> I'd be in favor of using three such DBs, one for core, PLs, and contrib. (More than that seems like it would clutter the disk a lot.) But I do use the standard regression DB as a handy testbed for a lot of stuff, and it has bothered me in the past that the contrib installcheck wipes it out.

I agree completely; I will work on that.

> Another point in the same general area: it would probably not be hard to support "make check" as well as "make installcheck" for the PLs. (The reason it's hard for contrib is that "make install" doesn't install contrib ... but it does install the PLs.) Is it worth doing it though? The easy implementation would require building a temp install tree for each PL, which seems mighty slow and disk-space-hungry.

Yes, way too much work if done as a separate run. The only way it would make sense to me would be if we integrated them into the core check run somehow. But I very much doubt it is worth it.

cheers

andrew
Re: [HACKERS] alternate regression dbs?
Andrew Dunstan [EMAIL PROTECTED] writes:
> Tom Lane wrote:
> > The easy implementation would require building a temp install tree for each PL, which seems mighty slow and disk-space-hungry.
>
> yes, way too much work if done as a separate run. The only way it would make sense to me would be if we integrated them into the core check run somehow. But I very much doubt it is worth it.

Yeah. I was seriously thinking of proposing that, until I realized that putting knowledge of the available optional PLs under src/test/regress is probably exactly what we don't want to do, given that there are likely to be more and more of them. We really want that knowledge localized in src/pl. Perhaps src/pl/Makefile could be taught to implement "make check" (and "make installcheck", for that matter) at its own level, and run the tests for all the configured PLs using only one installation step. But at the moment it seems like more trouble than it's worth.

			regards, tom lane
Re: [HACKERS] alternate regression dbs?
Andrew Dunstan [EMAIL PROTECTED] writes:
> Try attached ... season to taste.

The bulk of it is changes for dblink, which has the dbname hardcoded. Joe, any objections here?

			regards, tom lane
Re: [HACKERS] alternate regression dbs?
Tom Lane wrote:
> Andrew Dunstan [EMAIL PROTECTED] writes:
> > Try attached ... season to taste.
>
> The bulk of it is changes for dblink which has the dbname hardcoded. Joe, any objections here?

I haven't been able to keep up with the lists at all for the past few days, but I'll take a look later today.

Joe