Re: [HACKERS] What goes into the security doc?
On 22 Jan 2003 at 13:29, Christopher Kings-Lynne wrote:

> Recommend always running "initdb -W" and setting all pg_hba entries to md5.

Thanks. I also encountered this item on IRC:

[09:26] Guys, is there a problem with using /bin/true or /bin/false as the shell of the postgres user? The docs only say "adduser postgres", which will give postgres a nice shell.
[09:27] I'm asking because the guys from Gentoo (that's a distro, FWIW) want to use either /bin/false or /bin/true as postgres' shell.
[09:27] fede2: it means you won't be able to become the postgres user to run commands.
[09:27] ... to run SHELL commands.
[09:29] dvl: Although it's not the same, one could use "su -c foo postgres" to work around it.
[09:30] dvl: I was wondering if it had an even heavier reason, besides that.
[09:34] fede2: the manpage of su says that -c args are handled by the login shell!
[09:35] mmc_: Hmm.. true. That makes it a heavy enough reason. Thanks.
[09:35] * fede2 departs

--
Dan Langille : http://www.langille.org/

---(end of broadcast)---
TIP 6: Have you searched our list archives?

http://archives.postgresql.org
Re: [HACKERS] What goes into the security doc?
man su says (on Linux):

    -s, --shell=SHELL    run SHELL if /etc/shells allows it

Illustration:

    [adunsta:adunsta]$ su -s /bin/tcsh - -c 'ps -f $$'
    Password:
    UID        PID  PPID  C STIME TTY   STAT TIME CMD
    root     10682 10681  0 10:34 pts/0 S    0:00 -tcsh -c ps -f $$
    [adunsta:adunsta]$

So setting /bin/true as the login shell prevents real logins, but it doesn't prevent running commands as the user via su, even from a login shell.

andrew

- Original Message -
From: "Dan Langille" <[EMAIL PROTECTED]>
To: "Christopher Kings-Lynne" <[EMAIL PROTECTED]>
Cc: <[EMAIL PROTECTED]>
Sent: Friday, January 24, 2003 10:00 AM
Subject: Re: [HACKERS] What goes into the security doc?

> On 22 Jan 2003 at 13:29, Christopher Kings-Lynne wrote:
>
> > Recommend always running "initdb -W" and setting all pg_hba entries to md5.
>
> Thanks. I also encountered this item on IRC:
>
> [09:26] Guys, is there a problem with using /bin/true or
> /bin/false as the shell of the postgres user? The docs only say
> "adduser postgres", which will give postgres a nice shell.
> [09:27] I'm asking because the guys from Gentoo (that's a
> distro, FWIW) want to use either /bin/false or /bin/true as postgres'
> shell.
> [09:27] fede2: it means you won't be able to become the
> postgres user to run commands.
> [09:27] ... to run SHELL commands.
> [09:29] dvl: Although it's not the same, one could use "su -c foo
> postgres" to work around it.
> [09:30] dvl: I was wondering if it had an even heavier
> reason, besides that.
> [09:34] fede2: the manpage of su says that -c args are handled
> by the login shell!
> [09:35] mmc_: Hmm.. true. That makes it a heavy enough
> reason. Thanks.
> [09:35] * fede2 departs
> --
> Dan Langille : http://www.langille.org/
[HACKERS] Client interfaces documentation
Something that just occurred to me: should the documentation tree still contain full documentation for the various client interfaces, now that they have been unbundled?

If so, I'd very much like to see the part about libpq++ being "the" C++ interface changed to mention libpqxx as a replacement.

Jeroen
[HACKERS] JDBC drivers and streaming content
I'm about to start implementing streaming of queries to the server in the pgsql JDBC drivers when PreparedStatement is used with setBinaryStream... but before I get started, since I've never contributed before, I wanted to run it by everyone. I'm planning on making the following changes:

1. In QueryExecutor.sendQuery, if an object in m_binds is a java.io.Reader, then instead of calling toString and sending that, I'll read and send the contents in 16K increments.

2. In AbstractJdbc1Statement.setBinaryStream, instead of delegating to setBytes, I'll call bind directly, but send a custom java.io.Reader subclass (which adds the quotes and properly escapes the bytes and such) as the object.

A couple of questions:

- There are a few constants that could probably be tuned for performance (i.e., the size of the buffer used for streaming, and a threshold to avoid the streaming overhead for very short streams). Is there a fairly standard way to handle this stuff in the JDBC drivers? Should it be made configurable to the user? Read from a properties file? Stored in any specific class as a constant?

- It seems to be quite a pain that org.postgresql.core.Encoding works only with String, and can't copy into a provided char[] -- this will mean creating a large number of String objects during the streaming. I could fix this easily with java.nio.CharsetEncoder, but it would make the code dependent on JDK 1.4. Not desired?

- If there's a general dislike for runtime testing of object classes among the developer community, then I could, instead of special-casing Reader in sendQuery, wrap *all* PreparedStatement parameters in an interface that has a getReader method, and provide a default non-streaming implementation that uses StringReader. This is more intrusive, but I'll go whichever way makes it more likely for the patch to be committed.

- Am I missing anything? Is this harder than it seems? Seems like someone would have done it already...
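For illustration only -- this is not the driver's actual code, and it is Python rather than Java -- the chunked read-escape-send loop sketched in point 1 might look like the following; the `send` callback, the `escape_chunk` helper, and the 16K buffer size are assumptions:

```python
import io

CHUNK_SIZE = 16 * 1024  # assumed streaming buffer size, per the proposal


def escape_chunk(chunk: str) -> str:
    # Escape backslashes and single quotes as in a quoted SQL string literal
    return chunk.replace("\\", "\\\\").replace("'", "''")


def stream_parameter(reader: io.TextIOBase, send) -> None:
    """Send a large parameter as a quoted, escaped literal in CHUNK_SIZE
    pieces instead of materializing the whole value as one string."""
    send("'")
    while True:
        chunk = reader.read(CHUNK_SIZE)
        if not chunk:
            break
        send(escape_chunk(chunk))
    send("'")


# Usage: collect the pieces a hypothetical wire layer would receive
parts = []
stream_parameter(io.StringIO("O'Reilly"), parts.append)
print("".join(parts))  # prints 'O''Reilly'
```

The point of the sketch is only the shape of the loop: the value never exists in memory as a single escaped string, which is what makes setBinaryStream on large values feasible.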
--
www.designacourse.com
The Easiest Way to Train Anyone... Anywhere.

Chris Smith - Lead Software Developer/Technical Trainer
MindIQ Corporation
Re: [HACKERS] Suggestion for aggregate function
Greg Stark <[EMAIL PROTECTED]> writes:
> Manfred Koizar <[EMAIL PROTECTED]> writes:
> > Greg, we already have this feature, just the syntax is a bit different :-)
> >
> > SELECT DISTINCT ON (item_id) item_id,
> >        price AS lowest_price,
> >        store_id AS lowest_price_store
> >   FROM items_for_sale
> >  WHERE item_category = ?
> >  ORDER BY item_id, price
>
> Neat! I hadn't seen this.

Ok, so I still think DISTINCT ON is the neatest thing since sliced bread. But it strikes me as a bit of an odd syntax. It's very similar to GROUP BY, except that all the fields are implicitly aggregated using a peculiar aggregate function that grabs the first value according to the ORDER BY expression.

I'm using this already for lots of queries; it's very handy. But I'm finding it awkward in one situation -- when I also want aggregate values other than the first value according to the sort. Consider the above query if I also wanted to know the maximum and average prices per item, along with the stores that had the maximum and minimum prices and the total number of stores that stock the item. With DISTINCT ON I would have to do two queries to get the maximum and minimum along with the relevant stores, and then do a third query with GROUP BY to get the average and total number of stores.

What would be useful is something like

SELECT item_id,
       first(price) AS min_price, first(store_id) AS min_store,
       avg(price) AS avg_price,
       last(price) AS max_price, last(store_id) AS max_store,
       count(DISTINCT store_id) AS num_stores
  FROM (SELECT * FROM items_for_sale ORDER BY item_id, price) AS sorted_items
 GROUP BY item_id

This gives the benefits of DISTINCT ON but makes it easier to combine with GROUP BY.

--
greg

---(end of broadcast)---
Re: [HACKERS] Suggestion for aggregate function
Greg Stark <[EMAIL PROTECTED]> writes:
> What would be useful is something like
>
> SELECT item_id,
>        first(price) AS min_price, first(store_id) AS min_store,
>        avg(price) AS avg_price,
>        last(price) AS max_price, last(store_id) AS max_store,
>        count(DISTINCT store_id) AS num_stores
>   FROM (SELECT * FROM items_for_sale ORDER BY item_id, price) AS sorted_items
>  GROUP BY item_id

Write it yourself --- both first() and last() are trivial to code as user-defined aggregates.

regards, tom lane
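Tom's point that first() and last() reduce to trivial state-transition functions can be sketched outside SQL (in Python, purely for illustration -- the real thing would be CREATE AGGREGATE definitions; the store/price sample data here is made up):

```python
from functools import reduce


def first_sfunc(state, value):
    # Keep the first value seen; ignore everything after it
    return value if state is None else state


def last_sfunc(state, value):
    # Always replace the state with the latest value seen
    return value


# Sample rows: (store_id, price), arriving sorted by price,
# mirroring Greg's ordered subquery
prices = [("store_a", 10), ("store_b", 7), ("store_c", 12)]
rows = sorted(prices, key=lambda r: r[1])

min_store = reduce(first_sfunc, (s for s, _ in rows), None)
max_store = reduce(last_sfunc, (s for s, _ in rows), None)
print(min_store, max_store)  # prints: store_b store_c
```

Because the input is pre-sorted, "first" and "last" pick out the rows with the minimum and maximum price, which is exactly the behavior the proposed first()/last() aggregates would rely on.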
Re: [HACKERS] Client interfaces documentation
"Jeroen T. Vermeulen" <[EMAIL PROTECTED]> writes:
> Something that just occurred to me: should the documentation tree still
> contain full documentation for the various client interfaces, now that
> they have been unbundled?

No, and it doesn't AFAICS ... only the still-bundled client libs are mentioned in
http://www.ca.postgresql.org/users-lounge/docs/7.3/postgres/programmer-client.html

regards, tom lane
Re: [HACKERS] Options for growth
On Thu, Jan 16, 2003 at 12:23:52PM -0500, Neil Conway wrote:
>
> The estimates I've heard from a couple parties are that PostgreSQL tends
> to scale well up to 4 CPUs. I've been meaning to take a look at
> improving that, but I haven't had a chance yet...

I can definitely tell you that Postgres scales _fine_ beyond 4 processors. Indeed, we have found under some loads that 4 processors is not enough; but when we put it into an 8- or more-way box, it is much faster. That's on Solaris, though, which is generally very good at handling more than 4 CPUs. That's why Solaris is a good platform for us, even though its fork() times rot.

> think the cost of subsidizing some of that development would be a
> fraction of the license fees you'll end up paying Oracle over the
> years...

And it's worth pointing out what those ORAC licenses really cost: the subsidy might amount to as little as a single year's savings. By the way, ORAC may not be _quite_ as bulletproof as it seems. It shares file areas, and there are rumours of locking troubles that people trip over. Nothing they'll share with you, of course: the license forbids as much. But if you ask someone over the top of a glass, he or she might tell you about it.

A

--
Andrew Sullivan                         204-4141 Yonge Street
Liberty RMS                           Toronto, Ontario Canada
<[EMAIL PROTECTED]>                              M2P 2A8
                                         +1 416 646 3304 x110
[HACKERS] docbook and postgresql
Hi folks,

I'm trying to build the documentation for pgsql (so that I can change it for the stuff we are building) and I'm having trouble finding the necessary docbook stuff. I looked at:

http://developer.postgresql.org/docs/postgres/doc-build.html

    checking for onsgmls... onsgmls
    checking for openjade... openjade
    checking for DocBook V3.1... yes
    checking for DocBook stylesheets... /usr/share/sgml/docbook/
    checking for sgmlspl... sgmlspl

I set my DOCBOOKSTYLE to /usr/share/sgml/docbook. However, I don't seem to have anything that looks like nwalsh-modular on my system (the example in the above web page). As a result I get this:

    sailesh:datafix>~/work/postgres/doc/src/sgml: cd
    sailesh:datafix>~: cd work/postgres/
    sailesh:datafix>~/work/postgres: cd doc/src/sgml
    sailesh:datafix>~/work/postgres/doc/src/sgml: gmake admin.html
    /usr/bin/perl /usr/share/sgml/docbook//bin/collateindex.pl -f -g -t 'Index' -o bookindex.sgml -N
    Can't open perl script "/usr/share/sgml/docbook//bin/collateindex.pl": No such file or directory
    gmake: *** [bookindex.sgml] Error 2
    sailesh:datafix>~/work/postgres/doc/src/sgml:

So, sorry for the newbie question, but if somebody could point me to what exactly I need to install, given that configure thinks that I do have docbook-3.1, I'd be obliged. Thanks!

--
Pip-pip
Sailesh
http://www.cs.berkeley.edu/~sailesh
[HACKERS] PSQL and backward compatibility
I've started playing with a structure based on the description in this message:

http://groups.google.com/groups?hl=en&lr=lang_en&ie=UTF-8&oe=UTF-8&safe=off&selm=1043257402.83856.112.camel%40jester&rnum=21

Basically, it consists of a very simple schema to tell psql what \? commands are available, and the query to use to fetch the information.

- User types \d. Psql (as a last resort, after the checks in command.c) will query the database to see if \d exists with the number of arguments the user has supplied.

- If it exists (regular expression match of what the user typed against the column cmd_expression), it pulls out the SQL, prepares it via PREPARE, EXECUTEs it, and displays the results.

See schema and simple psql.patch attached. Caching, etc. could be done to speed things up -- not to mention keeping the plans around, but this was a quick hack.

Sample commands are included: \dqtest (prints out a row of junk), \dqdb (prints db listing -- like \l), and \dqdb with an argument (prints db listing -- but with a LIKE match on the dbname).

Thoughts or remarks? It allows all versions of psql using that table to pick up available commands for the database (old psql, new db gets commands for the new db), but translations for column headers, etc. will be wonky as they're still tied to psql.

--
Rod Taylor <[EMAIL PROTECTED]>

PGP Key: http://www.rbt.ca/rbtpub.asc

? .deps
? .describe.c.swp
? psql
Index: command.c
===
RCS file: /projects/cvsroot/pgsql-server/src/bin/psql/command.c,v
retrieving revision 1.88
diff -c -r1.88 command.c
*** command.c	2003/01/10 21:57:44	1.88
--- command.c	2003/01/25 01:43:38
***
*** 392,398
  			break;
  		default:
! 			status = CMD_UNKNOWN;
  	}

  	if (pattern)
--- 392,401
  			break;
  		default:
! 			success = describeUnmatched(cmd, pattern);
!
! 			if (!success)
! 				status = CMD_UNKNOWN;
  	}

  	if (pattern)
Index: describe.c
===
RCS file: /projects/cvsroot/pgsql-server/src/bin/psql/describe.c,v
retrieving revision 1.74
diff -c -r1.74 describe.c
*** describe.c	2003/01/07 20:56:06	1.74
--- describe.c	2003/01/25 01:44:09
***
*** 54,60
--- 54,165
  	return tmp;
  }

+ /*
+  * Checks the database for instructions on how to deal with any unmatched commands
+  *
+  * Returns true if it found and successfully processed the command.
+  */
+ bool
+ describeUnmatched(const char *cmd, const char *pattern)
+ {
+ 	PQExpBufferData buf;
+ 	PGresult   *res;
+ 	printQueryOpt myopt = pset.popt;
+ 	char		esccmd[strlen(cmd) * 2 + 1];
+ 	char		escpattern[strlen(cmd) * 2 + 1];
+ 	int			nargs = (pattern ? 1 : 0);
+ 	char	   *fTitle;
+ 	char	   *fTabQuery;
+ 	char	   *fInfoQuery;
+
+ 	/* Clean up the input data */
+ 	PQescapeString(esccmd, cmd, strlen(cmd));
+
+ 	if (pattern)
+ 		PQescapeString(escpattern, pattern, strlen(pattern));
+
+ 	/* Query the DB to see if there is a command matching the request */
+ 	initPQExpBuffer(&buf);
+
+ 	printfPQExpBuffer(&buf,
+ 					  "SELECT table_title, table_query, info_query "
+ 					  "FROM pgtools.psqlcommands "
+ 					  "WHERE nargs = '%d' AND '%s' ~ cmd_expression "
+ 					  "ORDER BY match_order LIMIT 1",
+ 					  nargs, cmd);
+
+ 	res = PSQLexec(buf.data, false);
+ 	termPQExpBuffer(&buf);
+ 	if (!res)
+ 		return false;
+
+ 	if (!PQntuples(res))
+ 		return false;
+
+ 	fTitle = PQgetvalue(res, 0, 0);
+ 	fTabQuery = PQgetvalue(res, 0, 1);
+ 	fInfoQuery = PQgetvalue(res, 0, 2);
+
+ 	PQclear(res);
+
+ 	/* Prepare queries */
+ 	printfPQExpBuffer(&buf, "PREPARE pg_psql ");
+
+ 	if (nargs > 0)
+ 	{
+ 		int			i;
+
+ 		appendPQExpBuffer(&buf, "(");
+
+ 		for (i = 0; i < nargs - 1; i++)
+ 			appendPQExpBuffer(&buf, "text,");
+
+ 		appendPQExpBuffer(&buf, "text)");
+ 	}
+ 	appendPQExpBuffer(&buf, "AS %s", fTabQuery);
+
+ 	res = PSQLexec(buf.data, false);
+ 	termPQExpBuffer(&buf);
+ 	if (!res)
+ 		return false;
+
+ 	/* Run queries, with arguments (cmd / pattern) */
+ 	printfPQExpBuffer(&buf, "EXECUTE pg_psql");
+
+ 	if (nargs > 0)
+ 	{
+ 		int
Re: Windows Build System was: [HACKERS] Win32 port patches submitted
Curtis Faith wrote:
> tom lane writes:
> > You think we should drive away our existing unix developers
> > in the mere hope of attracting windows developers? Sorry, it
> > isn't going to happen.
>
> Tom brings up a good point, that changes to support Windows should not
> add to the tasks of those who are doing the bulk of the work on Unixen.
>
> I don't think, however, that this necessarily means that having Windows
> developers use Cygwin is the right solution. We need to come up with a
> way to support Windows Visual C++ projects without adding work to the
> other developers.

[...]

> IMHO, having a native port without native (read Visual C++) project
> support is a huge missed opportunity.

Perhaps. On the other hand, it may be much more work than it's worth. See below.

> The Visual C++ environment does not require dependency specification; it
> builds dependency trees by keeping track of the #include files used
> during preprocessing.
>
> Because of this, it should be possible to:
>
> A) Write a script/tool that reads the input files from Unix makefiles to
> build a list of the files in PostgreSQL and place them in appropriate
> projects.
>
> or alternately:
>
> B) A script/tool that recurses the directories and does the same sort of
> thing. There could be some sort of mapping between directories and
> projects in Visual C++.

This may be necessary, but I seriously doubt it's anywhere close to sufficient. Right now, the Unix build relies on GNU autoconf to generate the Makefiles and many other files (even include files). And it doesn't just look for system-specific features and whatnot: it's the means by which features are selected at build time (such as SSL support, Kerberos support, which languages to build runtime support for, etc.). To use it requires a Unix shell and a bunch of command line tools (e.g., sed). That's why Cygwin is required right now.
Somehow *all* of that has to either be replaced, or someone has to decide which features will be built by all developers, or someone has to do all the legwork of making the Windows source tree roughly as configurable as the Unix one is. Doesn't sound like a terribly small task to me, though it might not be too bad for someone who has a lot of experience on both platforms. Since I don't have any real experience doing development under Windows, I'm not one to really say. But I thought you should at least know what you're up against.

I do agree that being able to build and debug PostgreSQL using whichever tools are most commonly used amongst Windows developers would be desirable, perhaps very much so...

--
Kevin Brown	[EMAIL PROTECTED]
[HACKERS] WAL replay logic (was Re: [PERFORM] Mount options for Ext3?)
Kevin Brown <[EMAIL PROTECTED]> writes:
> One question I have is: in the event of a crash, why not simply replay
> all the transactions found in the WAL? Is the startup time of the
> database that badly affected if pg_control is ignored?

Interesting thought, indeed. Since we truncate the WAL after each checkpoint, seems like this approach would no more than double the time for restart. The win is it'd eliminate pg_control as a single point of failure. It's always bothered me that we have to update pg_control on every checkpoint --- it should be a write-pretty-darn-seldom file, considering how critical it is.

I think we'd have to make some changes in the code for deleting old WAL segments --- right now it's not careful to delete them in order. But surely that can be coped with.

OTOH, this might just move the locus for fatal failures out of pg_control and into the OS' algorithms for writing directory updates. We would have no cross-check that the set of WAL file names visible in pg_xlog is sensible or aligned with the true state of the datafile area. We'd have to take it on faith that we should replay the visible files in their name order. This might mean we'd have to abandon the current hack of recycling xlog segments by renaming them --- which would be a nontrivial performance hit.

Comments anyone?

> If there exists somewhere a reasonably succinct description of the
> reasoning behind the current transaction management scheme (including
> an analysis of the pros and cons), I'd love to read it and quit
> bugging you. :-)

Not that I know of. Would you care to prepare such a writeup? There is a lot of material in the source-code comments, but no coherent presentation.

regards, tom lane
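A purely illustrative sketch (Python, not the actual recovery code) of the "replay the visible files in their name order" idea discussed above; the `replay_segment` callback and the 16-character hex segment-name filter are assumptions here:

```python
import os
import tempfile


def replay_all(xlog_dir: str, replay_segment) -> list:
    """Replay every WAL segment found in xlog_dir in file-name order.
    Segment names are fixed-width uppercase hex, so lexical order
    matches creation order -- the property this scheme relies on."""
    segments = sorted(
        f for f in os.listdir(xlog_dir)
        if len(f) == 16 and all(c in "0123456789ABCDEF" for c in f)
    )
    for seg in segments:
        replay_segment(os.path.join(xlog_dir, seg))
    return segments


# Usage with a fake pg_xlog directory containing out-of-order entries
with tempfile.TemporaryDirectory() as d:
    for name in ["0000000100000002", "0000000100000000", "0000000100000001"]:
        open(os.path.join(d, name), "w").close()
    replayed = replay_all(d, lambda path: None)
    print(replayed)
```

The fragility Tom points out is visible even in the sketch: nothing cross-checks that the directory listing is complete or sane -- recovery simply trusts whatever names the filesystem reports.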
Re: [HACKERS] Odd subselect in target list behavior WRT aggregates
"Mike Mascari" <[EMAIL PROTECTED]> writes:
> Hello. I have some code which generates subselects in the target
> list of a query and then aggregates the results. The code allows
> the user to determine the attributes of the aggregation. If the
> user chooses to aggregate on the same value twice, I get the
> "Sub-SELECT" error. If the user chooses a different second
> attribute of aggregation, no error occurs. Is that correct
> behavior?

This seems to be fixed as of 7.3, though I do not recall a previous bug report like it.

However, I wonder why you are doing it like that, and not with a join:

SELECT SUM(p.dstqty) AS agg,
       date_trunc('hour', sales.active) AS field1,
       date_trunc('day', sales.active) AS field2
  FROM purchases p, sales
 WHERE p.purchase = sales.purchase AND ...
 GROUP BY 2,3;

The multiple-sub-select approach will require a separate probe into "sales" to retrieve each of the fields; there's no optimization across different subselects.

regards, tom lane
Re: [HACKERS] Odd subselect in target list behavior WRT aggregates
Tom Lane wrote:
> "Mike Mascari" <[EMAIL PROTECTED]> writes:
> > Hello. I have some code which generates subselects in the target
> > list of a query and then aggregates the results. The code allows
> > the user to determine the attributes of the aggregation. If the
> > user chooses to aggregate on the same value twice, I get the
> > "Sub-SELECT" error. If the user chooses a different second
> > attribute of aggregation, no error occurs. Is that correct
> > behavior?
>
> This seems to be fixed as of 7.3, though I do not recall a previous
> bug report like it.

Thanks, Tom. I should have tried the current version before posting.

> However, I wonder why you are doing it like that, and not with a join:
>
> SELECT SUM(p.dstqty) AS agg,
>        date_trunc('hour', sales.active) AS field1,
>        date_trunc('day', sales.active) AS field2
>   FROM purchases p, sales
>  WHERE p.purchase = sales.purchase AND ...
>  GROUP BY 2,3;
>
> The multiple-sub-select approach will require a separate probe into
> "sales" to retrieve each of the fields; there's no optimization across
> different subselects.

Normally, the grouping is done on two or more distantly related pieces of data: "How many widgets were sold by John on Mondays?" "What is the most popular hour for sales by quarter?" etc. So the nature of the data is such that to dynamically generate the proper joins in the FROM/WHERE clause was too complex (for me). :-)

Thanks again,

Mike Mascari
[EMAIL PROTECTED]
Re: [HACKERS] Postgresql source
On Thu, 23.01.2003 at 02:29, [EMAIL PROTECTED] wrote:
> Can you please tell me how can I download all the source codes for
> postgresql??

ftp://ftp.postgresql.org/

> -Radha Manohar

--
Hannu Krosing <[EMAIL PROTECTED]>
[HACKERS] I am back
I am back from training at the Big Nerd Ranch. We had 13 people for the week and it went very well.

I have church meetings tomorrow but will try to get back to email this weekend. I also have to polish my presentations for Tokyo/Brussels because I leave on Wednesday. Also, I have Internet access in Japan, so I hope to be able to keep up with email while I am there.

FYI, just in the past two weeks, I have gotten requests to speak in China, Malaysia, Oregon, and Mexico! I will get as much done as possible.

--
  Bruce Momjian                        |  http://candle.pha.pa.us
  [EMAIL PROTECTED]                    |  (610) 359-1001
  +  If your life is a hard drive,     |  13 Roberts Road
  +  Christ can be your backup.        |  Newtown Square, Pennsylvania 19073