[GENERAL] RE: [GENERAL] ‘--pwfile’ command
Tommy Flewwelling wrote: Could someone tell me if the ‘--pwfile’ command can be applied to createdb.exe and psql.exe? No, but you can use the libpq password file. See http://www.postgresql.org/docs/current/static/libpq-pgpass.html Yours, Laurenz Albe ---(end of broadcast)--- TIP 2: Don't 'kill -9' the postmaster
Re: [GENERAL] Syntax error in a large COPY
Il Wednesday 07 November 2007 07:54:41 Reg Me Please ha scritto: Il Wednesday 07 November 2007 01:29:44 Alvaro Herrera ha scritto: Reg Me Please wrote: Il Tuesday 06 November 2007 22:13:15 hai scritto: That's the branch and bound. Editing a 29M+ line file takes some time. But this is the way I'm going to go right now. Huh, why not just use pgloader? Because I never heard about it. Because it's not included in my distribution's package list. And because I was trusting the core tools to work reasonably. I'll compile and use that. By the way, unless you have logs at the debug level, no information is found in the logs about the offending line(s) in the 29M+ COPY script. pgloader seems not that easy to use for a newbie like myself. Also because the documentation seems too skinny. In any case, each goto-line, add-lines, save, run cycle requires about 10 minutes on my PC. And the logs don't provide any useful detail. So, again, better logging would help in any case. -- Reg me Please
Re: [GENERAL] Syntax error in a large COPY
Il Wednesday 07 November 2007 11:10:40 Dimitri Fontaine ha scritto: Le mercredi 07 novembre 2007, Reg Me Please a écrit : pgloader seems not that easy to use for a newbie like myself. Also because the documentation seems too skinny. Sorry about this, writing documentation in English is not that easy when it's not one's native language... I'll accept any comment/patch to the documentation, the aim of it being to ease users' lives, of course ;) http://pgloader.projects.postgresql.org/ Short story: you have to make a pgloader.conf file where you explain where the data file is and what pgloader should expect in it (csv, text, what delimiter and quotes, etc), then run pgloader -Tc pgloader.conf The -T option will TRUNCATE the configured table(s) before COPYing data into it (them). In any case, each goto-line, add-lines, save, run cycle requires about 10 minutes on my PC. And the logs don't provide any useful detail. So, again, better logging would help in any case. pgloader would certainly give this, at first run... It seems to me it is worth the effort of reading the manual... Hi. pgloader rocks! Maybe just a complete example would suffice. Let's say a table structure, a CSV and a raw text file, a config file and the run output. Thanks. P.S. Why not include pgloader in the main tarball? -- Reg me Please
Re: [GENERAL] Syntax error in a large COPY
Le mercredi 07 novembre 2007, Reg Me Please a écrit : pgloader seems not that easy to use for a newbie like myself. Also because the documentation seems too skinny. Sorry about this, writing documentation in English is not that easy when it's not one's native language... I'll accept any comment/patch to the documentation, the aim of it being to ease users' lives, of course ;) http://pgloader.projects.postgresql.org/ Short story: you have to make a pgloader.conf file where you explain where the data file is and what pgloader should expect in it (csv, text, what delimiter and quotes, etc), then run pgloader -Tc pgloader.conf The -T option will TRUNCATE the configured table(s) before COPYing data into it (them). In any case, each goto-line, add-lines, save, run cycle requires about 10 minutes on my PC. And the logs don't provide any useful detail. So, again, better logging would help in any case. pgloader would certainly give this, at first run... It seems to me it is worth the effort of reading the manual... -- dim
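For illustration, a minimal pgloader.conf along those lines might look like this (section names, option names, and values here are only a sketch, not taken from the pgloader docs; check the documentation linked above and the shipped examples for the exact syntax):

```ini
; connection parameters (illustrative values)
[pgsql]
host = localhost
port = 5432
base = mydb
user = myuser

; one section per table to load (names below are made up)
[my_table]
table     = my_table
filename  = /path/to/data.txt
format    = text
columns   = a, b, c
```

Then a run such as `pgloader -Tc pgloader.conf` would truncate `my_table` and COPY the file into it, reporting rejected lines as it goes.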
Re: [GENERAL] Syntax error in a large COPY
Il Wednesday 07 November 2007 11:26:56 Dimitri Fontaine ha scritto: Le mercredi 07 novembre 2007, Reg Me Please a écrit : Maybe just a complete example would suffice. Let's say a table structure, a CSV and a raw text file, a config file and the run output. Do you mean something like the included examples, which I tend to also use as (regression) tests? http://cvs.pgfoundry.org/cgi-bin/cvsweb.cgi/pgloader/pgloader/examples/ Right. I installed the .deb. The man page has not been included. The examples have been copied into /usr/share/doc/pgloader/examples. The examples are OK. Regards, Thanks for your work. -- Reg me Please
Re: [GENERAL] Syntax error in a large COPY
Le mercredi 07 novembre 2007, Reg Me Please a écrit : Maybe just a complete example would suffice. Let's say a table structure, a CSV and a raw text file, a config file and the run output. Do you mean something like the included examples, which I tend to also use as (regression) tests? http://cvs.pgfoundry.org/cgi-bin/cvsweb.cgi/pgloader/pgloader/examples/ Why not include pgloader in the main tarball? If you're talking about the main PostgreSQL tarball... For starters it would have to be rewritten in C and be good enough to merit a contrib inclusion, and that's assuming the core product would benefit from such a project in the first place... PgFoundry is a great resource for a myriad of tools that make PG users' lives easier; maybe you'd be better off browsing it than expecting core to include more helper tools! Regards, -- dim
[GENERAL] number errors
Hi all, When my application returns errors from the database, some of the error numbers are the same. Why are the error numbers the same? Is it the ODBC driver or PostgreSQL that returns them? It's running on Windows. Thanks.
Re: [GENERAL] Syntax error in a large COPY
Il Tuesday 06 November 2007 19:05:52 Reg Me Please ha scritto: Hi all. I'm generating an SQL script to load some million rows into a table. I'm trying to use the COPY command in order to speed the load up. At a certain point I get an error telling about an invalid input syntax for type numeric The line number reported is the last one (the one containing the \.). Is there a way to know which line is really malformed? Thanks. Blame on me! The problem (spotted thanks to pgloader) was that I was using \n for null values instead of \N (capital n). As stated in the friendly documentation, \n stands for newline, while \N stands for NULL. While being clearly stated, this choice is a little bit confusing, at least for newbies like myself. Thanks everyone and Dimitri Fontaine for his pgloader. -- Reg me Please
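To illustrate the difference (a small sketch, not the original 29M-line script): in COPY's text format a literal \N is the NULL marker, while \n is interpreted as a newline character inside the value:

```sql
CREATE TABLE t (a integer, b numeric);

-- Tab-separated COPY text format; \N (capital N) marks NULL:
COPY t FROM stdin;
1	3.14
2	\N
\.

-- Writing a lowercase \n in the second column instead would be read
-- as a newline character, which is not valid input for type numeric,
-- producing the "invalid input syntax for type numeric" error.
```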
Re: [GENERAL] Postgresql simple query performance question
Reid Thompson escreveu: On Tue, 2007-11-06 at 14:39 -0300, André Volpato wrote: Remember that you can always use serial fields to count a table, like:

alter table foo add id serial;
select id from foo order by id desc limit 1;

This should return the same value as count(*), in a few msecs. -- ACV

not so...

test=# select version();
 PostgreSQL 8.2.4 on i686-pc-linux-gnu, compiled by GCC i686-pc-linux-gnu-gcc (GCC) 4.1.2 (Gentoo 4.1.2 p1.0.1)
(1 row)

test=# create table serialcount(aval integer);
CREATE TABLE
test=# \timing
Timing is on.
test=# insert into serialcount values (generate_series(1,1000));
INSERT 0 1000
Time: 42297.468 ms
test=# select count(*) from serialcount;
 count
-------
  1000
(1 row)
Time: 6158.188 ms
test=# select count(*) from serialcount;
 count
-------
  1000
(1 row)
Time: 2366.596 ms
test=# select count(*) from serialcount;
 count
-------
  1000
(1 row)
Time: 2090.416 ms
test=# select count(*) from serialcount;
 count
-------
  1000
(1 row)
Time: 2125.377 ms
test=# select count(*) from serialcount;
 count
-------
  1000
(1 row)
Time: 2122.584 ms
test=# alter table serialcount add id serial;
NOTICE:  ALTER TABLE will create implicit sequence "serialcount_id_seq" for serial column "serialcount.id"
ALTER TABLE
Time: 51733.139 ms
test=# select id from serialcount order by id desc limit 1;
  id
------
 1000
(1 row)
Time: 41088.062 ms
test=# select id from serialcount order by id desc limit 1;
  id
------
 1000
(1 row)
Time: 35638.317 ms
test=# vacuum analyze serialcount;
VACUUM
Time: 927.760 ms
test=# select id from serialcount order by id desc limit 1;
  id
------
 1000
(1 row)
Time: 34281.178 ms

I meant to select using an index.
I've done the same tests here, and realized that my server is two times slower than yours:

testeprog=# select version();
 PostgreSQL 8.1.4 on i686-pc-linux-gnu, compiled by GCC gcc (GCC) 3.3.3 20040412 (Red Hat Linux 3.3.3-7)
(1 row)

testeprog=# select count(*) from test;
 count
-------
  1000
(1 row)
Time: 4116.613 ms
testeprog=# alter table test add id serial;
NOTICE:  ALTER TABLE will create implicit sequence "test_id_seq" for serial column "test.id"
ALTER TABLE
Time: 90617.195 ms
testeprog=# select id from test order by id desc limit 1;
  id
------
 1000
(1 row)
Time: 64856.553 ms
testeprog=# create unique index itest1 on test using btree (id);
CREATE INDEX
Time: 29026.891 ms
testeprog=# explain analyze select id from test order by id desc limit 1;
 Limit  (cost=0.00..0.02 rows=1 width=4) (actual time=0.017..0.019 rows=1 loops=1)
   ->  Index Scan Backward using itest1 on test  (cost=0.00..185954.00 rows=1000 width=4) (actual time=0.014..0.014 rows=1 loops=1)
 Total runtime: 0.059 ms
(3 rows)

@Bill: Bill Moran wrote: I don't think so. What kind of accuracy do you have when rows are deleted? Also, sequences are not transactional, so rolled-back transactions will increment the sequence without actually adding rows.

You are right, the serial hack should not work in most OLTP cases. -- ACV
Re: [GENERAL] Postgresql simple query performance question
Il Wednesday 07 November 2007 13:08:46 André Volpato ha scritto: [raw HTML mail source snipped] Reid Thompson escreveu: Would it be possible to avoid the so-called HTML email body? -- Reg me Please
Re: [GENERAL] Syntax error in a large COPY
Le mercredi 07 novembre 2007, Reg Me Please a écrit : I installed the .deb. The man page has not been included. It seems the latest package on pgfoundry does have a problem here. As I have some patches waiting for a release, I'll make current CVS the 2.2.2 version and update the pgfoundry files sometime later. Thanks for reporting the issue, regards, -- dim
Re: [GENERAL] Postgresql simple query performance question
On Tue, 2007-11-06 at 09:29 -0500, Bill Moran wrote: In response to SHARMILA JOTHIRAJAH [EMAIL PROTECTED]: Hi We are in the process of testing for migration of our database from Oracle to Postgresql. I have a simple query Select count(*) from foo This is asked a lot. The quick answer is that PostgreSQL's method of MVCC makes it impossible to make this query fast. Perhaps, someday, some brilliant developer will come up with an optimization, but that hasn't happened yet. What release level is being tested? It may already have happened. 8.3 is substantially faster at seq scans, so the tests should be re-run on 8.3 beta. Also, re-run the Postgres test. It should be faster the second time, even if the database server is restarted between tests. -- Simon Riggs 2ndQuadrant http://www.2ndQuadrant.com
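If an approximate count is acceptable, the planner's own statistics can be read instead of scanning the table; reltuples is only as fresh as the last VACUUM or ANALYZE, so this is an estimate, not an exact count:

```sql
-- Approximate row count from planner statistics (fast, but only as
-- accurate as the last VACUUM/ANALYZE on the table):
SELECT reltuples::bigint AS approximate_rows
FROM pg_class
WHERE relname = 'foo';
```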
Re: [GENERAL] any way for ORDER BY x to imply NULLS FIRST in 8.3?
On Tue, 2007-11-06 at 09:48 -0500, Tom Lane wrote: Essentially the same text appears in SQL2003. Any application that depends on one particular choice here is therefore broken, or at least has chosen to work with only about half of the DBMSes in the world. If an application has already made that choice then we should allow them the opportunity to work with PostgreSQL. The application may be at fault, but PostgreSQL is the loser because of that decision. The SQL Standard says that the default for this is defined by the implementation; that doesn't bar us from changing the implementation if we wish. We can do that without changing PostgreSQL's historic default. Perhaps we can have a parameter? default_null_sorting = 'last' # may alternatively be set to 'first' (or another wording/meaning.) That is what I thought you'd implemented, otherwise I would have suggested this myself way back. This new parameter would be a small change, but will make a major difference to application portability. This seems like the key to unlocking your new functionality for most people. -- Simon Riggs 2ndQuadrant http://www.2ndQuadrant.com
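For reference, the per-query syntax added in 8.3 looks like this (table and column names are made up):

```sql
-- Explicit NULL placement in a sort, per query:
SELECT * FROM foo ORDER BY x DESC NULLS LAST;

-- An index whose ordering matches such a clause, so it can
-- satisfy the sort directly:
CREATE INDEX foo_x_idx ON foo (x DESC NULLS LAST);
```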
Re: [GENERAL] number errors
Hello, If you use ODBC you should distinguish errors from the ODBC driver from errors from PostgreSQL: the ODBC driver returns its own error codes, and composes the error description depending on the error code and text from the PostgreSQL server. So you should have these numbers: 1) ODBC error code - described in MSDN; 2) Native PostgreSQL error code - described in the PostgreSQL manual; 3) Error description - composed by the ODBC driver, based on the description and error code returned from the server. João Paulo Zavanela wrote: Hi all, When my application returns errors from the database, some of the error numbers are the same. Why are the error numbers the same? Is it the ODBC driver or PostgreSQL that returns them? It's running on Windows. Thanks.
[GENERAL] prepared statements suboptimal?
Hi, I'm planning to use prepared statements of indefinite lifetime in a daemon that will execute the same statements rather frequently in reply to client requests. This link: http://www.postgresql.org/docs/8.3/static/sql-prepare.html has a note on performance: In some situations, the query plan produced for a prepared statement will be inferior to the query plan that would have been chosen if the statement had been submitted and executed normally. This is because when the statement is planned and the planner attempts to determine the optimal query plan, the actual values of any parameters specified in the statement are unavailable. PostgreSQL collects statistics on the distribution of data in the table, and can use constant values in a statement to make guesses about the likely result of executing the statement. Since this data is unavailable when planning prepared statements with parameters, the chosen plan might be suboptimal. I don't understand why postgres couldn't plan this: SELECT foo.e, foo.f FROM foo WHERE pk=$1 AND b=$2 AND status='1' AND c = $3; to be later executed any slower than SELECT foo.e, foo.f FROM foo WHERE pk='abcabc' AND b='13' AND status='1' AND c = '2007-11-20 13:14:15'; Can I help it make more educated guesses? In what scenarios could prepared statements turn around and bite me, being slower than simple queries? Is this a real problem in practice? Should I refresh prepared statements from time to time? If so, how? Only by deallocating them and preparing anew? Any knob to tweak for that? Okay, enough questions :) Thank you for any insights.
Re: [GENERAL] (Never?) Kill Postmaster?
Christian Schröder wrote: Alvaro Herrera wrote: Please try thread apply all bt full on gdb. The first lines where the symbols are loaded are of course identical. The output of the command is in my opinion not very helpful: I was actually hoping that it would list the running threads in the process. It would have sufficed to say info threads, but the bt full command would have given more detail in case there were more. I find it a bit strange that it tells you that there is one thread; in a simple try here, it doesn't say anything at all: (gdb) info threads (gdb) In any case I can see in the files from which symbols are loaded that some Perl stuff seems to be multithreaded ... maybe that's where the problem comes from. Hmm, after creating a plperl function it is indeed different: (gdb) info threads * 1 Thread 0x2ba8945f3e20 (LWP 3606) 0x2ba894370645 in recv () from /lib/libc.so.6 (gdb) -- Alvaro Herrera  http://www.CommandPrompt.com/ PostgreSQL Replication, Consulting, Custom Development, 24x7 support
Re: [GENERAL] prepared statements suboptimal?
Il Wednesday 07 November 2007 12:25:46 rihad ha scritto: I don't understand why postgres couldn't plan this: SELECT foo.e, foo.f FROM foo WHERE pk=$1 AND b=$2 AND status='1' AND c = $3; to be later executed any slower than SELECT foo.e, foo.f FROM foo WHERE pk='abcabc' AND b='13' AND status='1' AND c = '2007-11-20 13:14:15'; The problem is that the planner does its work when it first sees the query. At that time the comparisons are all against unknown values. Try the same with dynamic SQL, that is, build the query dynamically by placing the current values instead of the $1, $2 and $3 placeholders. In this case the planner will see the query with all the current real values. Then execute it and compare the timings. Maybe you get better scores: there's no guarantee of better performance, because you are going to send the whole query again and again to the planner. Of course you need a plpgsql function for this. -- Reg me Please
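A sketch of that dynamic-SQL approach, assuming the foo table and columns from the message above (the function name and parameter types are illustrative, and this is untested):

```sql
CREATE OR REPLACE FUNCTION get_foo(p_pk text, p_b text, p_c timestamptz)
RETURNS SETOF foo AS $$
DECLARE
    r foo%ROWTYPE;
BEGIN
    -- Inline the actual parameter values so the planner sees real
    -- constants at plan time; quote_literal() guards against quoting
    -- problems in the built-up string.
    FOR r IN EXECUTE 'SELECT * FROM foo'
        || ' WHERE pk = '      || quote_literal(p_pk)
        || ' AND b = '         || quote_literal(p_b)
        || ' AND status = ''1'''
        || ' AND c = '         || quote_literal(p_c)
    LOOP
        RETURN NEXT r;
    END LOOP;
    RETURN;
END;
$$ LANGUAGE plpgsql;
```

The trade-off is exactly as described: every call pays for a fresh parse and plan, in exchange for a plan tailored to the actual values.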
Re: [GENERAL] Exclusive Locks Taken on User Tables?
On Tue, 2007-11-06 at 15:53 -0500, Marc wrote: Ok. I'll keep looking at pg_locks. My original reason for reaching out to the list was over confusion as to when an EXCLUSIVE lock would be taken at table level, since the documentation says this should never happen except to some system catalogs. Is there something missing from the documentation? I feel like that would be a big clue. Here is what I'm referencing: http://www.postgresql.org/docs/8.2/interactive/explicit-locking.html OK, I see what you mean. This page: http://www.postgresql.org/docs/8.2/interactive/view-pg-locks.html references the section titled Table-level Locks, rather than referencing the locking chapter in general. The docs don't discuss that ExclusiveLock applies to the locktype, not to the table in all cases, though that could easily be inferred from what is there. I've had that question before myself. The tuple-level ExclusiveLocks you are seeing are locking only the rows; the table containing those rows will not be ExclusiveLock-ed. The docs are correct in what they say about *table-level* (i.e. relation, as referred to by pg_locks) ExclusiveLocks. I'll submit a patch. -- Simon Riggs 2ndQuadrant http://www.2ndQuadrant.com
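The distinction is visible directly in pg_locks, where the locktype column says what each ExclusiveLock actually covers:

```sql
-- locktype = 'relation' would be a table-level lock;
-- locktype = 'tuple' (or 'transactionid') only concerns individual
-- rows, and does not ExclusiveLock the table itself.
SELECT locktype, relation::regclass AS rel, mode, granted
FROM pg_locks
WHERE mode = 'ExclusiveLock';
```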
Re: [GENERAL] prepared statements suboptimal?
rihad wrote: Hi, I'm planning to use prepared statements of indefinite lifetime in a daemon that will execute same statements rather frequently in reply to client requests. This link: http://www.postgresql.org/docs/8.3/static/sql-prepare.html has a note on performance: In some situations, the query plan produced for a prepared statement will be inferior to the query plan that would have been chosen if the statement had been submitted and executed normally. This is because when the statement is planned and the planner attempts to determine the optimal query plan, the actual values of any parameters specified in the statement are unavailable. PostgreSQL collects statistics on the distribution of data in the table, and can use constant values in a statement to make guesses about the likely result of executing the statement. Since this data is unavailable when planning prepared statements with parameters, the chosen plan might be suboptimal. I don't understand why postgres couldn't plan this: SELECT foo.e, foo.f FROM foo WHERE pk=$1 AND b=$2 AND status='1' AND c = $3; to be later executed any slower than SELECT foo.e, foo.f FROM foo WHERE pk='abcabc' AND b='13' AND status='1' AND c = '2007-11-20 13:14:15'; Can I help it make more educated guesses? In what scenarios could prepared statements turn around and bite me, being slower than simple queries? Is this a real problem in practice? Should I refresh prepared statements from time to time? If so, how? Only by deallocating them and preparing anew? Any knob to tweak for that? Okay, enough questions :) Thank you for any insights. From http://www.postgresql.org/docs/8.3/static/protocol-flow.html I just read that This possible penalty is avoided when using the unnamed statement, since it is not planned until actual parameter values are available. 
Since I'm using Perl's DBI/pg, in postmaster's logs I can see that DBI's prepare() seems to be using named prepared statements: Nov 7 15:57:46 sol postgres[1685]: [2-1] LOG: execute dbdpg_1: Nov 7 15:57:46 sol postgres[1685]: [2-2] SELECT ... is there any way to tell it to use unnamed prepared statements? I understand this is not a strictly PostgreSQL question so sorry if I'm off topic.
Re: [GENERAL] strange timezone problem
Nick Johnson [EMAIL PROTECTED] writes: I'm using PostgreSQL 8.2.3 and seeing this behaviour with timezones:

select create_date from article_lead;
        create_date
----------------------------
 2007-11-04 16:35:33.17+00
 2007-11-04 04:35:36.09+00
 2007-11-05 04:35:36.38+00
 2007-11-05 16:35:36.67+00
(4 rows)

select create_date from article_lead where create_date >= '2007-11-03 17:00:00.0' and create_date <= '2007-11-04 16:00:00.0';
        create_date
----------------------------
 2007-11-04 04:35:36.09+00

Shouldn't that second row have been in the results of the second query?

Huh? Those results look perfectly sane to me.

set TimeZone='America/Los_Angeles';
select create_date from article_lead;
        create_date
----------------------------
 2007-11-04 08:35:33.17-08
 2007-11-03 21:35:36.09-07   -- why 07?
 2007-11-04 20:35:36.38-08
 2007-11-05 08:35:36.67-08

That's correct ... as of last year, DST extends through the first Sunday in November in the USA. regards, tom lane
Re: [GENERAL] any way for ORDER BY x to imply NULLS FIRST in 8.3?
Tom Lane [EMAIL PROTECTED] writes: Simon Riggs [EMAIL PROTECTED] writes: Perhaps we can have a parameter? default_null_sorting = 'last' # may alternatively be set to 'first' Not unless it's locked down at initdb time. Otherwise flipping the value bars you from using every existing index ... including those on the system catalogs ... which were made with the other setting. Surely if we added this we would also add explicit NULLS LAST clauses to all system catalog indexes and system views and make explicitly constructed scans in the backend use NULLS LAST. -- Gregory Stark EnterpriseDB http://www.enterprisedb.com Ask me about EnterpriseDB's 24x7 Postgres support!
Re: [GENERAL] The .pgpass file
Tommy Flewwelling wrote: Hello, How do I specify in the command-line to access the .pgpass file when creating a database? You don't. If the file exists and has the correct permissions, createdb will read it. If the needed password is found, the connection will be made without ever prompting the user. FWIW the -W switch is useless. If createdb finds that the password is needed, it will prompt the user even if -W is not specified. If createdb finds that the password is not needed, then it won't prompt the user. (Note that none of this is actually createdb's feature. It is all done by libpq internally.) -- Alvaro Herrera  http://www.CommandPrompt.com/ PostgreSQL Replication, Consulting, Custom Development, 24x7 support
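For illustration, each .pgpass line has the form hostname:port:database:username:password, and * can be used as a wildcard in the first four fields (the values below are examples):

```
# ~/.pgpass  -- must be chmod 0600 or libpq will ignore it
localhost:5432:mydb:myuser:s3cret
*:*:*:myuser:s3cret
```

With such a file in place, plain `createdb mydb` or `psql mydb` will pick up the password without any command-line switch.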
Re: [GENERAL] Temporary, In-memory Postgres DB?
On 11/7/07, Gauthier, Dave [EMAIL PROTECTED] wrote: Yes, I'm thinking of a small DB (could fit into mem). And the notion that all the data would reside in memory makes sense. But what about the metadata? And there is the question of the initdb and all the stuff it creates. That goes to disk, yes? no? It goes where you tell it to. initdb -D /mnt/ramdisk/data et voila! Another question, but first my tenuous understanding of how dbs are created, up for critique... - initdb creates (on disk) all the stuff you need to have before you createdb. - createdb creates a db (puts it in the place you designated with initdb) Or in a tablespace you've created since then - create schema can be used to create multiple schemas in a single DB. - drop schema can be used to get rid of a schema within a db - dropdb can be used to get rid of a db that was created with createdb Q: How does one get rid of whatever gets created with initdb? Is it (gulp) just a rm -r ? yep shut down the db first, then clean out the directory. I do have access to scratch disks which, in effect, could be used as a temporary storage area. IOW, if something goes wrong, and I don't get to delete a db that was created on the scratch disk, it'll get cleaned up for me overnight. It's just a question of how long it'll take to initdb + createdb + create a db model + load. For a small db like you're talking about only a minute or two. If you have a real pg db that you don't want to scramble, then do all your initdb under a different unprivileged system account. Just point it to a different port. What am I exposing myself to if I have dozens (maybe a hundred max) PG databases running on the same server? (v8.2.0) First off, run 8.2.5 if you can, not 8.2.0. 8.2.0 has a few nasty bugs you don't want to get bitten by. Generally, running dozens to hundreds of instances of pgsql on one machine is a bad idea.
Unless you have a very good business case for it, it's better to run multiple dbs inside one instance, which is no big deal at all. I've run intranet pg machines, under pgsql 7.2 with well over 100 individual databases with no problems at all.
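Putting the pieces of this thread together, a throwaway cluster on a ramdisk might be stood up like this (paths, port, and database name are illustrative; run it as an unprivileged account as suggested above):

```
initdb -D /mnt/ramdisk/data
pg_ctl -D /mnt/ramdisk/data -o "-p 5433" -l /mnt/ramdisk/logfile start
createdb -p 5433 scratchdb
psql -p 5433 scratchdb
pg_ctl -D /mnt/ramdisk/data stop
rm -rf /mnt/ramdisk/data        # the "undo" for initdb
```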
Re: [GENERAL] Syntax error in a large COPY
On 11/07/07 11:35, Andrej Ricnik-Bay wrote: On 11/8/07, Collin Kidder [EMAIL PROTECTED] wrote: [snip] No, just not everyone agrees with your viewpoint on this topic. Top posting has its place and some of us prefer it. But they could just adhere to the law of the land, or when in Rome, ... practice instead of kicking off a fuss. And with my mail client top-posting has no place. Let's just stick to good old standards. <SARCASM> What ever happened to I gotta do what's right for me! and I'm OK, you're OK? </SARCASM> -- Ron Johnson, Jr. Jefferson LA USA Give a man a fish, and he eats for a day. Hit him with a fish, and he goes away for good!
Re: DB on a ramdisk (was Re: [GENERAL] Temporary, In-memory Postgres DB?)
Undo an initdb? Probably the same way you undo unlinking an SQLite database. Maybe being wrapped in my own little niche I just don't know enough about the wide world of hyperfeaturitis, but making a temporary DB a feature seems a little vague. It doesn't really take that long to create a new database (especially if it's scripted!), and it's even faster if you make the temporary DB a schema off a public database. On 11/07/07 11:27, Gauthier, Dave wrote: I understand caching. Here's the reason I'm inquiring into this line of thought... I already have a big on-disk DB with lots and lots of data, and lots of stored functions and a data model that works with DB loaders and other code that queries out data. I also have users that want all of that, except for the particular data content. They basically want to load a DB with data that's in their scratch area without polluting what's in the main DB. The cardinality of this personal, scratch data will be orders of magnitude smaller than what's in the main (could all fit in memory). And once loaded, they would be able to run all the same DB load and query tools that work on the main DB, just redirect to the small, personal DB. This would be a good app for SQLite, but SQLite can't do a lot of the things that are running in the main DB (like stored procedures). It's become clear that PG cannot do a pure in-memory DB like SQLite. It's why I initially called this a longshot and the answer to my question is probably no. But enabling something like a pure in-memory (and temporary) DB for small apps that can reap all the wonderful features of PG would make it very attractive for some users. Just something to think about for future development. One question I had earlier that I don't think got answered was how to undo an initdb. dropdb drops a DB, but how do I undo an initdb?
-dave -Original Message- From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of Tom Lane Sent: Wednesday, November 07, 2007 12:05 PM To: Ron Johnson Cc: pgsql-general@postgresql.org Subject: Re: DB on a ramdisk (was Re: [GENERAL] Temporary, In-memory Postgres DB?) Ron Johnson [EMAIL PROTECTED] writes: On 11/07/07 09:58, Tom Lane wrote: Or put it on a ramdisk filesystem. But doesn't that just add more overhead and reduce the amount of memory that the OS can cache things in? It's very possibly not a win, but the kinds of people who ask this question at all do not understand the concept of caching, so I'm sure they'll be happier with a solution where the data demonstrably never hits disk ;-) A case where it could be a win is where you are thrashing the DB with heavy update load. Even if everything is cached there will be a pretty serious amount of disk write traffic, which'd possibly interfere with other system activity. -- Ron Johnson, Jr. Jefferson LA USA Give a man a fish, and he eats for a day. Hit him with a fish, and he goes away for good!
Re: [GENERAL] Syntax error in a large COPY
Collin Kidder wrote: I'm with Thomas. I think that, while inline posting is a good thing, bottom posting is dead stupid and wastes my time. It is far easier to follow a thread with top posting as the relevant text is right there at the top ready to be read. That sounds more like an argument for not including the original text at all. As far as I'm concerned, in-line posting *with* editing of the text is the *only* reasonable thing to do. Top-posting is lazy, arrogant, and assumes reading material on a recently read thread (so the context is fresh). It offers no advantages when reading a posting after-the-fact or taken in isolation. The reader has to first re-establish the context, which means reading the message from the bottom up. Bottom posting (w/o editing) is only *slightly* less lazy, but doesn't make the assumption that the reader is current on the context, at least. Bottom posting w/o editing forces the reader to wade through old material that isn't relevant, however, to reestablish the context. The argument based on being able to link back up through a thread to get context is a non-sequitur. If one really believes that's the case, then don't include the original text *at all* (whether top or bottom posting) [and see how many people complain about lack of context!] If there is some context that is relevant to what's being added, seeing *just that context* immediately prior to reading the new material is invaluable. [This *isn't* a bottom-posted message - it just looks like one because of the context editing!] -- Steve Wampler -- [EMAIL PROTECTED] The gods that smiled on your birth are now laughing out loud.
Re: [GENERAL] any way for ORDER BY x to imply NULLS FIRST in 8.3?
On Wed, 2007-11-07 at 11:39 -0500, Tom Lane wrote: Simon Riggs [EMAIL PROTECTED] writes: On Wed, 2007-11-07 at 10:23 -0500, Tom Lane wrote: I put this in the same category as altering the identifier case-folding rules. That has much less effect on application portability, Really? Try counting the number of requests for that in the archives, vs the number of requests for this. I think you're arguing in favour of both changes, not burying my point. Most applications don't hit the case folding issue for identifiers. Certainly people have, but those are people doing things with metadata, like trying to write tools that work with both. They're database-savvy people who come on list and try to fix things. Almost all applications have NULLs and use ORDER BY and indexes. That doesn't mean everybody is affected by NULL sorting, but they might be and probably don't realise. I think you're right in identifying there are other issues for portability. My list would be:
1. statement level abort
2. equivalent performance of identical SQL (e.g. NOT IN)
3. case insensitive searches
4. NULL ordering
5. case folding identifiers
Those differ depending upon the database. -- Simon Riggs 2ndQuadrant http://www.2ndQuadrant.com ---(end of broadcast)--- TIP 4: Have you searched our list archives? http://archives.postgresql.org/
Re: [GENERAL] strange timezone problem
Nick Johnson wrote: Before I open a bug on this, I wanted to do a sanity check, since there may be something I'm just not seeing. I'm using PostgreSQL 8.2.3 and seeing this behaviour with timezones: select create_date from article_lead; create_date --- 2007-11-04 16:35:33.17+00 2007-11-04 04:35:36.09+00 2007-11-05 04:35:36.38+00 2007-11-05 16:35:36.67+00 (4 rows) Note the top two are not in order (cuz you didn't 'order by create_date') select create_date from article_lead where create_date >= '2007-11-03 17:00:00.0' and create_date <= '2007-11-04 16:00:00.0'; create_date --- 2007-11-04 04:35:36.09+00 This one is correct, it is the second row. And the first row should not be in because it's after 16:00. The db looks correct to me. -Andy ---(end of broadcast)--- TIP 1: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [EMAIL PROTECTED] so that your message can get through to the mailing list cleanly
Re: [GENERAL] Syntax error in a large COPY
My point is: with top-posting I don't care how many lines were repeated because I don't have to scroll. Considering there is an RFC that recommends inline posting over top-posting (http://tools.ietf.org/html/rfc1855), and considering the fact that this topic has been beat to death on dozens of mailing lists and the predominant preference is _not_ for top-posting -- perhaps you should either follow the preferences of the group, or leave the group. I'm with Thomas. I think that, while inline posting is a good thing, bottom posting is dead stupid and wastes my time. It is far easier to follow a thread with top posting as the relevant text is right there at the top ready to be read. But this horse has been beat to death before... Obviously not, as it keeps coming back to life. I guess it's an undead horse? No, just not everyone agrees with your viewpoint on this topic. Top posting has its place and some of us prefer it. Obviously I'm not doing it but it's only because of the large number of anal retentive people on lists like this. And so... with that my view is out there. I hate bottom posting. But I for one will do it to keep the peace. ---(end of broadcast)--- TIP 3: Have you checked our extensive FAQ? http://www.postgresql.org/docs/faq
Re: [GENERAL] prepared statements suboptimal?
rihad [EMAIL PROTECTED] writes: Aha, thanks for a thorough explanation. Now I understand that while looking for a way to fulfill the query postgres will try hard to pick the one requiring the least number of row visits. I've skimmed over my queries: almost all of them make use of the primary key as the first thing in the WHERE clause (say, a username, which is the only pk in the table): shouldn't that be enough for postgres to *always* decide to scan the pk's index (since a query on a pk always returns either one or zero results)? Yeah, if there's always a PK equality constraint then the dependence on specific parameter values is much weaker, so you could probably use a prepared statement without worrying. The cases where prepared statements tend to suck usually involve either inequalities, or equalities on non-unique columns where the number of matches varies wildly for different data values. In cases like that, knowing the exact value being compared to makes a very large difference in the rowcount estimate. regards, tom lane ---(end of broadcast)--- TIP 2: Don't 'kill -9' the postmaster
[GENERAL] strange timezone problem
Before I open a bug on this, I wanted to do a sanity check, since there may be something I'm just not seeing. I'm using PostgreSQL 8.2.3 and seeing this behaviour with timezones: select create_date from article_lead; create_date --- 2007-11-04 16:35:33.17+00 2007-11-04 04:35:36.09+00 2007-11-05 04:35:36.38+00 2007-11-05 16:35:36.67+00 (4 rows) select create_date from article_lead where create_date >= '2007-11-03 17:00:00.0' and create_date <= '2007-11-04 16:00:00.0'; create_date --- 2007-11-04 04:35:36.09+00 Shouldn't that second row have been in the results of the second query? (create_date is of timestamptz type). Also noted this oddity, though it may be unrelated: set TimeZone='America/Los_Angeles'; select create_date from article_lead; create_date --- 2007-11-04 08:35:33.17-08 2007-11-03 21:35:36.09-07 -- why 07? 2007-11-04 20:35:36.38-08 2007-11-05 08:35:36.67-08 Nick ---(end of broadcast)--- TIP 6: explain analyze is your friend
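[Editorial aside, not part of the original thread.] The "-07" row is not a bug: US daylight saving time ended at 02:00 local time on 2007-11-04 (09:00 UTC), so UTC timestamps from earlier that morning still render in PDT (-07) while later ones render in PST (-08). A quick check with Python's zoneinfo reproduces the offsets in Nick's output:

```python
# Verify the -07/-08 offsets in the America/Los_Angeles output.
# US DST ended 2007-11-04 02:00 local time (09:00 UTC).
from datetime import datetime, timedelta, timezone
from zoneinfo import ZoneInfo

la = ZoneInfo("America/Los_Angeles")

# 04:35 UTC is before the 09:00 UTC changeover: still PDT (-07:00)
before = datetime(2007, 11, 4, 4, 35, 36, tzinfo=timezone.utc).astimezone(la)
# 16:35 UTC is after the changeover: PST (-08:00)
after = datetime(2007, 11, 4, 16, 35, 33, tzinfo=timezone.utc).astimezone(la)

print(before.utcoffset() == timedelta(hours=-7))  # True
print(after.utcoffset() == timedelta(hours=-8))   # True
```

The "before" timestamp lands on 2007-11-03 21:35 local time, matching the "-07" row in the post exactly.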
DB on a ramdisk (was Re: [GENERAL] Temporary, In-memory Postgres DB?)
-BEGIN PGP SIGNED MESSAGE- Hash: SHA1 On 11/07/07 09:58, Tom Lane wrote: Ron Johnson [EMAIL PROTECTED] writes: On 11/07/07 09:03, Gauthier, Dave wrote: Is there such a thing as a temporary, probably in-memory, version of a Postgres DB? If you have enough RAM, and your database is small enough, the OS will eventually cache the whole thing. Or put it on a ramdisk filesystem. But doesn't that just add more overhead and reduce the amount of memory that the OS can cache things in? - -- Ron Johnson, Jr. Jefferson LA USA Give a man a fish, and he eats for a day. Hit him with a fish, and he goes away for good! -BEGIN PGP SIGNATURE- Version: GnuPG v1.4.6 (GNU/Linux) iD8DBQFHMeykS9HxQb37XmcRArErAJ47+9oq1/fTZZ4AXrLnL2qGo6E29gCgqVhP DoZuWDTpWE4Rks3tjAWa0mQ= =fuHr -END PGP SIGNATURE- ---(end of broadcast)--- TIP 1: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [EMAIL PROTECTED] so that your message can get through to the mailing list cleanly
Re: [GENERAL] The .pgpass file
On Wednesday, 7 November 2007, Tommy Flewwelling wrote: How do I inform the compiler to extract the password from the file and not prompt the user? It does that automatically. Just omit the -W option. -- Peter Eisentraut http://developer.postgresql.org/~petere/ ---(end of broadcast)--- TIP 6: explain analyze is your friend
Re: [GENERAL] md5() sorting
On 11/7/07, Karsten Hilbert [EMAIL PROTECTED] wrote: On Wed, Nov 07, 2007 at 03:54:02PM +0100, Martijn van Oosterhout wrote: Should I be going about this sorting or hashing or detection business in another way entirely which can be done at the SQL level ? I'm wondering if you cast the md5sum as a bytea instead of text and then sort, if that would solve it simply. Along the lines of ... ORDER BY decode(md5('...'), 'hex'); ? I knew I'd ask here ;-) Maybe using the digest(..., 'md5') function from pgcrypto would be better? It gives bytea immediately. -- marko ---(end of broadcast)--- TIP 1: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [EMAIL PROTECTED] so that your message can get through to the mailing list cleanly
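[Editorial aside, not part of the original thread.] The decode()-to-bytea suggestion can be sanity-checked outside the database. Sorting the raw digest bytes is locale-independent, which is exactly what ordering a bytea inside PostgreSQL gives you. A Python sketch (the column list below is a hypothetical stand-in for GNUmed's schema representation):

```python
# Locale-independent ordering of md5 line hashes, mirroring
# ORDER BY decode(md5(...), 'hex') in SQL.
import hashlib

lines = [
    "public.article_lead.create_date::timestamptz",
    "public.article_lead.id::integer",
]
hashes = [hashlib.md5(l.encode("utf-8")).hexdigest() for l in lines]

# Sort by the raw 16 digest bytes -- byte comparison never depends on
# the locale, unlike text comparison under e.g. a Norwegian collation.
ordered = sorted(hashes, key=bytes.fromhex)

# Hash of the concatenation, as in GNUmed's schema-hash procedure.
combined = hashlib.md5("".join(ordered).encode("ascii")).hexdigest()
```

For lowercase hex strings, byte order happens to coincide with C-locale string order; the point of going through bytes (or bytea) is that no collation setting can ever change it.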
Re: [GENERAL] The .pgpass file
Tommy Flewwelling [EMAIL PROTECTED] writes: How do I specify in the command-line to access the .pgpass file when creating a database? Huh? You don't specify anything, it's done automatically when needed. I don't want to have to include (-W): C:\postgressql\bin> createdb -U postgres -p 5432 -W -E UTF8 myDatabase You never have to include -W, whether you use .pgpass or not. That switch is only there for historical reasons. regards, tom lane ---(end of broadcast)--- TIP 1: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [EMAIL PROTECTED] so that your message can get through to the mailing list cleanly
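[Editorial aside, not part of the original thread.] For completeness: the password file libpq consults has one line per server, in the form hostname:port:database:username:password (with * as a wildcard in any field), and libpq ignores it unless its permissions are 0600. A small sketch that writes such a file (path and credentials are illustrative placeholders; the real location is ~/.pgpass on Unix or %APPDATA%\postgresql\pgpass.conf on Windows):

```python
# Sketch: create a libpq password file so createdb/psql stop prompting.
import os
import stat

path = "/tmp/pgpass.example"   # real file: ~/.pgpass (Unix),
                               # %APPDATA%\postgresql\pgpass.conf (Windows)
with open(path, "w") as f:
    f.write("localhost:5432:myDatabase:postgres:secret\n")

# libpq silently ignores the file unless it is user-read/write only.
os.chmod(path, stat.S_IRUSR | stat.S_IWUSR)

mode = stat.S_IMODE(os.stat(path).st_mode)
print(oct(mode))  # 0o600
```

With that in place, `createdb -U postgres -p 5432 -E UTF8 myDatabase` picks up the password without any -W.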
Re: [GENERAL] Temporary, In-memory Postgres DB?
Ron Johnson [EMAIL PROTECTED] writes: On 11/07/07 09:03, Gauthier, Dave wrote: Is there such a thing as a temporary, probably in-memory, version of a Postgres DB? If you have enough RAM, and your database is small enough, the OS will eventually cache the whole thing. Or put it on a ramdisk filesystem. regards, tom lane ---(end of broadcast)--- TIP 4: Have you searched our list archives? http://archives.postgresql.org/
Re: [GENERAL] any way for ORDER BY x to imply NULLS FIRST in 8.3?
On Wed, 2007-11-07 at 16:05 +0100, Martijn van Oosterhout wrote: On Wed, Nov 07, 2007 at 02:37:41PM +0000, Simon Riggs wrote: Editing an application, you would be required to add the words NULLS FIRST to every single ORDER BY and every single CREATE INDEX in an application. If we know that is what people would do, why not have one parameter to do this for them? I find it hard to believe that every single query in an application depends on the ordering of NULLs. In fact, I don't think I've even written a query that depended on a particular way of sorting NULLs. Is it really that big a deal? True, but how would you know for certain? You'd need to examine each query to be able to tell, which would take even longer. Or would you not bother, catch a few errors in test and then wait for the application to break in random ways when a NULL is added later? I guess that's what most people do, if they do convert. I'd like to remove one difficult barrier to Postgres adoption. We just need some opinions from people who *haven't* converted to Postgres, which I admit is difficult cos they're not listening. Implement SQLServer and MySQL behaviour? Now we're talking about hundreds of new applications that might decide to migrate/support PostgreSQL because of our flexibility in being able to support both kinds of sorting. TBH I think long term it should be attached to each column, as it is a property of the collation (my COLLATE patch let you specify it per column). That's a great idea, but orthogonal to the discussion about migrating from other databases. No other database works like that, nor does the SQL standard, but I'll admit it's sound thinking otherwise. -- Simon Riggs 2ndQuadrant http://www.2ndQuadrant.com ---(end of broadcast)--- TIP 2: Don't 'kill -9' the postmaster
Re: [GENERAL] odbcng
Hello, This query works for me on Access 2003. Which versions of Access and ODBCng do you have? We can communicate via [EMAIL PROTECTED]. I'll try to help you with any problems. Sam Mason wrote: On Tue, Nov 06, 2007 at 05:48:12PM -0300, Alvaro Herrera wrote: FYI there's another Postgres ODBC driver that is said to have better performance. https://projects.commandprompt.com/public/odbcng (Yes, my company maintains it) Are there any known issues when calling it from VB? I've got a VB (MS Access) client that uses PG as its backend and it seems to die horribly when doing any sort of query that returns a text column. This happens through either DAO or ADO; though DAO gives the error "the size of a field is too long", ADO just segfaults. For example, the following code doesn't work for me. Looks like the sort of thing that should get lots of test coverage so maybe it's something on my system.
Public Sub test()
    Dim con As ADODB.Connection, rs As ADODB.Recordset
    Set con = New ADODB.Connection
    con.Open "DSN=badgerstudy"
    Set rs = con.Execute("SELECT 1, 'foo'::TEXT, 'bar'")
    While Not rs.EOF
        rs.MoveNext
    Wend
End Sub
Thanks, Sam ---(end of broadcast)--- TIP 6: explain analyze is your friend Thanks, Andrei. ---(end of broadcast)--- TIP 1: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [EMAIL PROTECTED] so that your message can get through to the mailing list cleanly
[GENERAL] Temporary, In-memory Postgres DB?
This is a real longshot, but here goes... Is there such a thing as a temporary, probably in-memory, version of a Postgres DB? Sort of like SQLite, only with the features/function of PG? A DB like this would exist inside of, and for the duration of, a script/program that created it, then vanish when the script/program ends. Probably not, but if not, then this would be (IMO) a great addition to have, something that'd really make it distinct from MySQL . I'd use SQLite, but I want to have stored functions and other real database features that it just doesn't have. Thanks -dave
[GENERAL] The .pgpass file
Hello, How do I specify in the command-line to access the .pgpass file when creating a database? I would like to use ~/.pgpass instead of -W on the command line when creating a database (createdb) and was wondering the correct syntax. For example, I don't want to have to include (-W): C:\postgressql\bin> createdb -U postgres -p 5432 -W -E UTF8 myDatabase and substitute it for ~/.pgpass or PGPASSFILE: C:\postgressql\bin> createdb -U postgres -p 5432 ~/.pgpass -E UTF8 myDatabase How do I inform the compiler to extract the password from the file and not prompt the user? Any suggestions would be greatly appreciated. Tommy
Re: [GENERAL] any way for ORDER BY x to imply NULLS FIRST in 8.3?
On Wed, Nov 07, 2007 at 02:37:41PM +0000, Simon Riggs wrote: Editing an application, you would be required to add the words NULLS FIRST to every single ORDER BY and every single CREATE INDEX in an application. If we know that is what people would do, why not have one parameter to do this for them? I find it hard to believe that every single query in an application depends on the ordering of NULLs. In fact, I don't think I've even written a query that depended on a particular way of sorting NULLs. Is it really that big a deal? Implement SQLServer and MySQL behaviour? Now we're talking about hundreds of new applications that might decide to migrate/support PostgreSQL because of our flexibility in being able to support both kinds of sorting. TBH I think long term it should be attached to each column, as it is a property of the collation (my COLLATE patch let you specify it per column). Have a nice day, -- Martijn van Oosterhout [EMAIL PROTECTED] http://svana.org/kleptog/ Those who make peaceful revolution impossible will make violent revolution inevitable. -- John F Kennedy
Re: [GENERAL] any way for ORDER BY x to imply NULLS FIRST in 8.3?
On Wed, 2007-11-07 at 08:38 -0500, Bruce Momjian wrote: Simon Riggs wrote: If an application has already made that choice then we should allow them the opportunity to work with PostgreSQL. The application may be at fault, but PostgreSQL is the loser because of that decision. The SQL Standard says that the default for this is defined by the implementation; that doesn't bar us from changing the implementation if we wish. We can do that without changing PostgreSQL's historic default. Perhaps we can have a parameter? default_null_sorting = 'last' # may alternatively be set to 'first' (or another wording/meaning.) That is what I thought you'd implemented, otherwise I would have suggested this myself way back. This new parameter would be a small change, but will make a major difference to application portability. This seems like the key to unlocking your new functionality for most people. You already have that control at the SQL SELECT level so you are just avoiding typing to add the GUC parameter. My understanding is that both MySQL and MSSQL support NULLS FIRST by default, so being able to accept much of their SQL without change would be a huge win. Editing an application, you would be required to add the words NULLS FIRST to every single ORDER BY and every single CREATE INDEX in an application. If we know that is what people would do, why not have one parameter to do this for them? Now imagine you are writing an application that has to work on multiple databases. Can you realistically create a workable framework that has the SQL written in multiple different ways? That issue is the big issue preventing many off-the-shelf software vendors from supporting Postgres. Say you did decide to edit the application. As soon as you edit the SQL within an application it typically will violate any support contract in place. That's usually enough to prevent even the brave from doing this. 
One might argue that SQL generators such as Hibernate can automatically and easily generate the required SQL, so they don't need this. That's very nice to know we'll be able to use the new feature maybe 10-20% of the time, but what about other applications? We already have parameters of this category, for example: default_with_oids == the WITH OIDS clause on CREATE TABLE, default_transaction_isolation..., default_read_only..., plus many of the other GUCs in the statement behaviour section of the Server Configuration chapter, and add_missing_from, transform_null_equals, etc.: http://developer.postgresql.org/pgdocs/postgres/runtime-config-compatible.html#RUNTIME-CONFIG-COMPATIBLE-CLIENTS I think we need more requests for such a feature before we add it. Almost none of the features we've added have come by request. Features get added because we see the benefit ourselves. Yes, we should discuss this more widely; I'm confident many others will see the benefit in allowing migration from other systems to happen more easily. What we have now implements SQL Standard behaviour. I think that's uninteresting for 99% of applications. I believe in standardisation, but nobody gets excited about it. There are few applications that will specify NULLS FIRST for a few queries only, actually coding that into the SQL. Implement SQLServer and MySQL behaviour? Now we're talking about hundreds of new applications that might decide to migrate/support PostgreSQL because of our flexibility in being able to support both kinds of sorting. It's going to be a short patch. -- Simon Riggs 2ndQuadrant http://www.2ndQuadrant.com ---(end of broadcast)--- TIP 5: don't forget to increase your free space map settings
Re: [GENERAL] prepared statements suboptimal?
rihad [EMAIL PROTECTED] writes: I don't understand why postgres couldn't plan this: SELECT foo.e, foo.f FROM foo WHERE pk=$1 AND b=$2 AND status='1' AND c >= $3; to be later executed any slower than SELECT foo.e, foo.f FROM foo WHERE pk='abcabc' AND b='13' AND status='1' AND c >= '2007-11-20 13:14:15'; The reason is that without knowing the parameter values, the planner has to pick a generic plan that will hopefully not be too awful regardless of what the actual values end up being. When it has the actual values it can make much tighter estimates of the number of matching rows, and possibly choose a much better but special-purpose plan. As an example, if the available indexes are on b and c then the best query plan for the first case is probably bitmap indexscan on b. But in the second case, the planner might be able to determine (by consulting the ANALYZE stats) that there are many rows matching b='13' but very few rows with c >= '2007-11-20 13:14:15', so for those specific parameter values an indexscan on c would be better. It would be folly to choose that as the generic plan, though, since on the average a one-sided inequality on c could be expected to not be very selective at all. regards, tom lane ---(end of broadcast)--- TIP 1: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [EMAIL PROTECTED] so that your message can get through to the mailing list cleanly
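[Editorial aside, not part of the original thread.] Tom's point can be made concrete with a toy row-count estimator (a deliberately simplified sketch, not PostgreSQL's planner code; the table and statistics are hypothetical): without the parameter value, the planner can only assume an average frequency, which is badly wrong for skewed data.

```python
# Toy selectivity estimator: per-value stats vs. the generic average.
stats_b = {"13": 5000, "14": 10, "15": 10}   # rows per value of column b
total_rows = 5020

def estimated_rows(value=None):
    if value is None:
        # Generic (prepared) plan: no value known, assume the average.
        return total_rows // len(stats_b)
    # Custom plan: consult the per-value frequency from ANALYZE-style stats.
    return stats_b.get(value, 1)

print(estimated_rows())       # 1673: the one-size-fits-all guess
print(estimated_rows("13"))   # 5000: bitmap/seq scan territory
print(estimated_rows("14"))   # 10: an index scan on b wins easily
```

The generic estimate (1673) is wrong by two to three orders of magnitude for both actual values, which is why a plan chosen without the values can be far from optimal for any particular execution.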
Re: [GENERAL] Npsql is much faster than ODBC ?
Alvaro Herrera wrote: Rainer Bauer wrote: Andrej Ricnik-Bay wrote: On Nov 7, 2007 2:40 PM, Rainer Bauer [EMAIL PROTECTED] wrote: That's nice to hear. But I respect licences as they are and the ODBCng driver is licenced under the GPL. That doesn't mean that you're not allowed to use it with commercial applications; it just means that you need to be happy to provide the source for it on request. Which is exactly the reason why the LGPL licence was created. So that any software can link against a library without the restrictions of the GPL. Keep in mind, though, that the ODBC driver is not linked to your app. It is only loaded on demand at run time, and can be replaced by any other ODBC driver. So AFAIU your application is shielded from GPL. IANAL of course. Neither am I. However, the GPL FAQ has an entry specially for this case: http://www.gnu.org/licenses/gpl-faq.html#NFUseGPLPlugins If the program dynamically links plug-ins, and they make function calls to each other and share data structures, we believe they form a single program, which must be treated as an extension of both the main program and the plug-ins. In order to use the GPL-covered plug-ins, the main program must be released under the GPL or a GPL-compatible free software license, and that the terms of the GPL must be followed when the main program is distributed for use with these plug-ins. The way I read this section is that linking to a GPL ODBC driver would imply that I have to release my program under a GPL (compatible) licence. This was one of the reasons why I added Postgres support to my program instead of MySQL [1]. They altered the licence for their drivers from LGPL to GPL so that you have to purchase a commercial licence. Rainer [1] In the meantime I am of course glad that I made this decision. I have not only learned a lot more about databases, but especially that Postgres is superior to MySQL ;-) ---(end of broadcast)--- TIP 2: Don't 'kill -9' the postmaster
[GENERAL] md5() sorting
Hi, in GNUmed (wiki.gnumed.de) we use schema hashing to detect whether a database can safely be upgraded or used by a client. The general procedure is this: - generate a line-by-line representation of the database objects in the format schema.table.column::data type from the information catalog - sort those lines by their md5() hash (to avoid locale related sorting issues - or so we thought) - generate an md5() hash over the concatenation and compare that to known hashes This has worked nicely so far. However, recently a Norwegian user found that his setup sorts those md5() line hashes differently from other setups. Whether his setup is broken or not, we'd like to avoid this issue in the future. What immediately comes to mind is to convert the md5() hex string to integer and sort by that (numeric sorting should, by all means, be universally acceptable) but, alas, it's too large even for bigint. So, I was thinking to extract parts of the string, convert those into ints and re-concatenate those ints into a string and sort by that - sorting digits-only strings should be pretty safe universally, too. However, I am not entirely sure whether I'd be running a higher risk of collisions that way. (Much simplified) example: md5 = x'fe' (I know an md5 can never really yield that value) part 1 = x'f' = 15 part 2 = x'e' = 14 string to sort by = '1514' (while in reality 'fe' = 254 = '254') I *think* I should be safe but would like to hear another opinion. --- I just realized this is not an SQL or even PostgreSQL related question at all so to justify my post: Should I be going about this sorting or hashing or detection business in another way entirely which can be done at the SQL level ? ;-) Thanks, Karsten -- GPG key ID E4071346 @ wwwkeys.pgp.net E167 67FD A291 2BEA 73BD 4537 78B9 A9F9 E407 1346 ---(end of broadcast)--- TIP 5: don't forget to increase your free space map settings
Re: [GENERAL] Syntax error in a large COPY
Reg Me Please wrote: P.S. Why not including the pgloader into the main tarball? We are not attempting to include every useful tool in the database server. We're actually moving in the opposite direction: stuff has been offloaded to pgfoundry as appropriate. Add-on packages are encouraged. If your distributor does not carry packages for interesting pgFoundry projects, complain to them! -- Alvaro Herrera http://www.amazon.com/gp/registry/5ZYLFMCVHXC We're here to devour each other alive(Hobbes) ---(end of broadcast)--- TIP 4: Have you searched our list archives? http://archives.postgresql.org/
Re: [GENERAL] Postgresql simple query performance question
On Wednesday 07 November 2007 13:47:26, SHARMILA JOTHIRAJAH wrote: Hi we are testing with version PostgreSQL 8.2.3. Why not use at least the current 8.2.5? Read here http://www.postgresql.org/docs/current/static/release.html for details. -- Reg me Please ---(end of broadcast)--- TIP 3: Have you checked our extensive FAQ? http://www.postgresql.org/docs/faq
[GENERAL] Exclusive Locks Taken on User Tables?
Hi, I'm seeing an EXCLUSIVE lock being taken on a table even though the documentation says that "This lock mode is not automatically acquired on user tables by any PostgreSQL command." My SQL is UPDATE users SET online = $1 where username = $2 username is the PK on the users table. Other locks taken by the transaction are 1 RowExclusiveLock for the users table and 1 RowExclusiveLock on each of the 6 explicit indexes on that table and another for the implicit users_pkey index. The result of these locks is that concurrent calls for the same statement are being serialized because the ExclusiveLock being requested is not being granted. Any thoughts on why this might be happening and what I could do to resolve it? Thanks, ---Marc
Re: [GENERAL] Exclusive Locks Taken on User Tables?
Version of postgres is 8.2.4. Maybe it will help to give more b/g on how I'm identifying the problem? The way this materializes as a real issue surrounds transactions left idle. There is a bug in our app that we haven't tracked down yet where on occasion we end up with connections marked '<IDLE> in transaction'. As a stop-gap for now, I wrote a script that does the following: 1) queries postgres to identify those connections select procpid as age from pg_stat_activity where user <> 'slony' and user <> 'kettle' and current_query = '<IDLE> in transaction' and (now() - query_start) > interval '1 minute' 2) When it finds PIDs that match the criteria, we run some diagnostic queries before killing the PIDs (to help us track down the bug in our app that's the root cause) a) List of non-idle statements select *, now() - query_start as age from pg_stat_activity where current_query <> '<IDLE>' b) List of database locks SELECT pg_class.relname AS table, pg_database.datname AS database, transaction, pid, mode, granted FROM pg_locks, pg_class, pg_database WHERE pg_locks.relation = pg_class.oid AND pg_locks.database = pg_database.oid ORDER BY pg_class.relname, mode 3) It then kills the PIDs and sleeps for 30s before again printing a list of the non-idle statements that are running for more than 1 minute. At this point, I kill those because I presume they are deadlocked. It's in this second report that I always see that UPDATE statement and in the list of locks I see ExclusiveLock granted on the users table for one of the running pids but not the others. 
On Nov 6, 2007 3:01 PM, Tom Lane [EMAIL PROTECTED] wrote: Marc [EMAIL PROTECTED] writes: This is the query that I'm running to view locks: SELECT pg_class.relname AS table, pg_database.datname AS database, transaction, pid, mode, granted FROM pg_locks, pg_class, pg_database WHERE pg_locks.relation = pg_class.oid AND pg_locks.database = pg_database.oid ORDER BY pg_class.relname, mode I'm pretty sure this filters out transactionid lock types because I'm joining to pg_database and pg_class. Pls correct me if I'm wrong though. It won't filter out row-level locks on rows within tables. You're probably looking at a transient row lock taken by a blocked SELECT FOR UPDATE. You didn't show exactly what the real problem was, but I'm wondering if this is foreign-key conflicts in a pre-8.1 PG version. regards, tom lane
Re: [GENERAL] any way for ORDER BY x to imply NULLS FIRST in 8.3?
On 11/7/07, Tom Lane [EMAIL PROTECTED] wrote: A more general objection is that causing query semantics to change in subtle ways based on a GUC variable has more often than not proven to be a bad idea. On top of that, this is another one of those conversations that basically are predicated on the premise that other databases have quirks that make / encourage / allow the user to write bad SQL, and we need to do something so that their bad SQL will run properly on PostgreSQL. I work with 3 Oracle DBAs, and they are all trained by Oracle (the database, not the company) to write queries that make my brain hurt. Case statement? nope, they use encode. And there are dozens of cases where they use non-standard SQL, and they aren't going to stop any time soon, because it's just what they know. As someone who wishes we could switch case folding easily from lower to upper for some use cases, I understand the desire of folks to want things in pgsql to be easily switchable to fix these kinds of issues. But I don't think most of them are worth the effort and the bugs that could be introduced. ---(end of broadcast)--- TIP 5: don't forget to increase your free space map settings
Re: DB on a ramdisk (was Re: [GENERAL] Temporary, In-memory Postgres DB?)
In response to Gauthier, Dave [EMAIL PROTECTED]: One question I had earlier that I don't think got answered was how to undo an initdb. dropdb drops a DB, but how do I undo an initdb? rm -rf the directory in which you ran the initdb. -- Bill Moran http://www.potentialtech.com ---(end of broadcast)--- TIP 9: In versions below 8.0, the planner will ignore your desire to choose an index scan if your joining column's datatypes do not match
Re: [GENERAL] list of postgres related unexpected 'features'
Sascha Bohnenkamp wrote: Is there a list postgres related unexpected 'features', like count(*) is expensive etc.? I do not ask to bash postgres, but to use it ... and it would be nice if I have not to try any pittfall by myself. Something like http://sql-info.de/postgresql/postgres-gotchas.html? Rainer ---(end of broadcast)--- TIP 4: Have you searched our list archives? http://archives.postgresql.org/
Re: [GENERAL] prepared statements suboptimal?
Tom Lane wrote: rihad [EMAIL PROTECTED] writes: I don't understand why postgres couldn't plan this: SELECT foo.e, foo.f FROM foo WHERE pk=$1 AND b=$2 AND status='1' AND c >= $3; to be later executed any slower than SELECT foo.e, foo.f FROM foo WHERE pk='abcabc' AND b='13' AND status='1' AND c >= '2007-11-20 13:14:15'; The reason is that without knowing the parameter values, the planner has to pick a generic plan that will hopefully not be too awful regardless of what the actual values end up being. When it has the actual values it can make much tighter estimates of the number of matching rows, and possibly choose a much better but special-purpose plan. As an example, if the available indexes are on b and c then the best query plan for the first case is probably bitmap indexscan on b. But in the second case, the planner might be able to determine (by consulting the ANALYZE stats) that there are many rows matching b='13' but very few rows with c >= '2007-11-20 13:14:15', so for those specific parameter values an indexscan on c would be better. It would be folly to choose that as the generic plan, though, since on the average a one-sided inequality on c could be expected to not be very selective at all. Aha, thanks for a thorough explanation. Now I understand that while looking for a way to fulfill the query postgres will try hard to pick the one requiring the least number of row visits. I've skimmed over my queries: almost all of them make use of the primary key as the first thing in the WHERE clause (say, a username, which is the only pk in the table): shouldn't that be enough for postgres to *always* decide to scan the pk's index (since a query on a pk always returns either one or zero results)? Same question for any number of joins where bar.id or baz.id is always a PK: select ... from foo JOIN bar ON(foo.bar_id=bar.id) JOIN baz ON(foo.baz_id=baz.id) WHERE asd=? 
AND dsa=?; ---(end of broadcast)--- TIP 9: In versions below 8.0, the planner will ignore your desire to choose an index scan if your joining column's datatypes do not match
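The difference Tom describes can be seen directly with EXPLAIN. A minimal sketch, reusing the table and column names from the quoted queries (the prepared-statement name q is invented):

```sql
-- Prepared statement: the planner must pick one generic plan that
-- works for all possible parameter values.
PREPARE q(text, text, timestamp) AS
    SELECT foo.e, foo.f FROM foo
    WHERE pk = $1 AND b = $2 AND status = '1' AND c = $3;

EXPLAIN EXECUTE q('abcabc', '13', '2007-11-20 13:14:15');

-- Literal values: the planner can consult the ANALYZE statistics for
-- each constant and may choose a different, tighter plan.
EXPLAIN SELECT foo.e, foo.f FROM foo
WHERE pk = 'abcabc' AND b = '13' AND status = '1'
  AND c = '2007-11-20 13:14:15';
```

Comparing the two EXPLAIN outputs shows whether the generic plan actually differs from the value-specific one for a given data distribution.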
Re: [GENERAL] any way for ORDER BY x to imply NULLS FIRST in 8.3?
Simon Riggs [EMAIL PROTECTED] writes: On Wed, 2007-11-07 at 10:23 -0500, Tom Lane wrote: I put this in the same category as altering the identifier case-folding rules. That has much less effect on application portability, Really? Try counting the number of requests for that in the archives, vs the number of requests for this. regards, tom lane ---(end of broadcast)--- TIP 6: explain analyze is your friend
Re: [GENERAL] any way for ORDER BY x to imply NULLS FIRST in 8.3?
Simon Riggs wrote: On Wed, 2007-11-07 at 16:05 +0100, Martijn van Oosterhout wrote: On Wed, Nov 07, 2007 at 02:37:41PM +, Simon Riggs wrote: Editing an application, you would be required to add the words NULLS FIRST to every single ORDER BY and every single CREATE INDEX in an application. If we know that is what people would do, why not have one parameter to do this for them? I find it hard to believe that every single query in an application depends on the ordering of NULLs. In fact, I don't think I've even written a query that depended on a particular way of sorting NULLs. Is it really that big a deal? True, but how would you know for certain? You'd need to examine each query to be able to tell, which would take even longer. Or would you not bother, catch a few errors in test and then wait for the application to break in random ways when a NULL is added later? I guess that's what most people do, if they do convert. I'd like to remove one difficult barrier to Postgres adoption. We just need some opinions from people who *haven't* converted to Postgres, which I admit is difficult because they're not listening. May I, as an outsider, comment? :) I really think of ASC NULLS FIRST (and DESC NULLS LAST) as the way to go. Imagine a last_login column sorted descending: users who have never logged in (NULL) come out as the most recently logged in, which is not very intuitive. I vote for sort_nulls_first defaulting to false in order not to break backward compatibility. ---(end of broadcast)--- TIP 5: don't forget to increase your free space map settings
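To make the last_login example concrete, here is a small sketch (table and data invented) using the per-query NULLS clause that 8.3 adds:

```sql
CREATE TABLE users (name text, last_login timestamp);
INSERT INTO users VALUES ('alice', '2007-11-01'), ('bob', NULL);

-- Default behaviour: DESC sorts NULLs first, so bob (never logged in)
-- shows up as the most recently logged in.
SELECT name FROM users ORDER BY last_login DESC;

-- 8.3 syntax: push the never-logged-in users to the end instead.
SELECT name FROM users ORDER BY last_login DESC NULLS LAST;
```

The same clause is accepted in CREATE INDEX, which is why converting an application means touching every ORDER BY and index definition that cares about NULL placement.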
Re: [GENERAL] md5() sorting
On Wed, Nov 07, 2007 at 05:36:47PM +0200, Marko Kreen wrote: I'm wondering if you cast the md5sum as a bytea instead of text and then sort, if that would solve it simply. Along the lines of ... ORDER BY decode(md5('...'), 'hex'); Maybe using the digest(.., 'md5') function from pgcrypto would be better? It gives bytea immediately. Sounds better at first, yes, but requires pgcrypto to be installed on machines onto which GNUmed is to be deployed. Thanks for the suggestion, though. Karsten -- GPG key ID E4071346 @ wwwkeys.pgp.net E167 67FD A291 2BEA 73BD 4537 78B9 A9F9 E407 1346 ---(end of broadcast)--- TIP 5: don't forget to increase your free space map settings
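To spell out the two suggestions (table t and column payload are placeholders): sorting the text form of md5() goes through the database's locale collation, while sorting the decoded bytes — or pgcrypto's digest() output — is a plain byte-wise comparison and therefore stable across machines:

```sql
-- Locale-dependent: md5() returns text, sorted per the database collation.
SELECT md5(payload) FROM t ORDER BY md5(payload);

-- Locale-independent: compare the raw 16 bytes instead.
SELECT md5(payload) FROM t ORDER BY decode(md5(payload), 'hex');

-- With pgcrypto installed, digest() yields bytea directly:
-- SELECT payload FROM t ORDER BY digest(payload, 'md5');
```

The decode() variant needs nothing beyond core PostgreSQL, which matches Karsten's deployment constraint.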
Re: [GENERAL] Temporary, In-memory Postgres DB?
Yes, I'm thinking of a small DB (could fit into mem). And the notion that all the data would reside in memory makes sense. But what about the metadata? And there is the question of initdb and all the stuff it creates. That goes to disk, yes? no? Another question, but first my tenuous understanding of how dbs are created, up for critique...
- initdb creates (on disk) all the stuff you need to have before you createdb.
- createdb creates a db (puts it in the place you designated with initdb)
- create schema can be used to create multiple schemas in a single DB.
- drop schema can be used to get rid of a schema within a db
- dropdb can be used to get rid of a db that was created with createdb
Q: How does one get rid of whatever gets created with initdb? Is it (gulp) just a rm -r ? I do have access to scratch disks which, in effect, could be used as a temporary storage area. IOW, if something goes wrong, and I don't get to delete a db that was created on the scratch disk, it'll get cleaned up for me overnight. It's just a question of how long it'll take to initdb + createdb + create a db model + load. Thanks for the expert advice! -dave -Original Message- From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of Ron Johnson Sent: Wednesday, November 07, 2007 10:17 AM To: pgsql-general@postgresql.org Subject: Re: [GENERAL] Temporary, In-memory Postgres DB? On 11/07/07 09:03, Gauthier, Dave wrote: This is a real longshot, but here goes... Is there such a thing as a temporary, probably in-memory, version of a Postgres DB? Sort of like SQLite, only with the features/function of PG? A DB like this would exist inside of, and for the duration of, a script/program that created it, then vanish when the script/program ends. Probably not, but if not, then this would be (IMO) a great addition to have, something that'd really make it distinct from MySQL.
I'd use SQLite, but I want to have stored functions and other real database features that it just doesn't have. If you have enough RAM, and your database is small enough, the OS will eventually cache the whole thing. I know that's not exactly what you're talking about, but I think it's as close as you'll get. -- Ron Johnson, Jr. Jefferson LA USA Give a man a fish, and he eats for a day. Hit him with a fish, and he goes away for good! ---(end of broadcast)--- TIP 3: Have you checked our extensive FAQ? http://www.postgresql.org/docs/faq ---(end of broadcast)--- TIP 6: explain analyze is your friend
Re: [GENERAL] md5() sorting
On Wed, Nov 07, 2007 at 03:42:11PM +0100, Karsten Hilbert wrote: Should I be going about this sorting or hashing or detection business in another way entirely which can be done at the SQL level ? I'm wondering if you cast the md5sum as a bytea instead of text and then sort, if that would solve it simply. Have a nice day, -- Martijn van Oosterhout [EMAIL PROTECTED] http://svana.org/kleptog/ Those who make peaceful revolution impossible will make violent revolution inevitable. -- John F Kennedy
Re: [GENERAL] Npsql is much faster than ODBC ?
Andrej Ricnik-Bay wrote: On Nov 7, 2007 2:40 PM, Rainer Bauer [EMAIL PROTECTED] wrote: That's nice to hear. But I respect licences as they are and the ODBCng driver is licenced under the GPL. That doesn't mean that you're not allowed to use it with commercial applications; it just means that you need to be happy to provide the source for it on request. Which is exactly the reason why the LGPL licence was created. So that any software can link against a library without the restrictions of the GPL. Rainer ---(end of broadcast)--- TIP 5: don't forget to increase your free space map settings
Re: [GENERAL] Npsql is much faster than ODBC ?
Rainer Bauer wrote: Alvaro Herrera wrote: Rainer Bauer wrote: Andrej Ricnik-Bay wrote: On Nov 7, 2007 2:40 PM, Rainer Bauer [EMAIL PROTECTED] wrote: That's nice to hear. But I respect licences as they are and the ODBCng driver is licenced under the GPL. That doesn't mean that you're not allowed to use it with commercial applications; it just means that you need to be happy to provide the source for it on request. Which is exactly the reason why the LGPL licence was created. So that any software can link against a library without the restrictions of the GPL. Keep in mind, though, that the ODBC driver is not linked to your app. It is only loaded on demand at run time, and can be replaced by any other ODBC driver. So AFAIU your application is shielded from the GPL. IANAL of course. Neither am I. However, the GPL FAQ has an entry specifically for this case: http://www.gnu.org/licenses/gpl-faq.html#NFUseGPLPlugins If the program dynamically links plug-ins, and they make function calls to each other and share data structures, we believe they form a single program, which must be treated as an extension of both the main program and the plug-ins. In order to use the GPL-covered plug-ins, the main program must be released under the GPL or a GPL-compatible free software license, and that the terms of the GPL must be followed when the main program is distributed for use with these plug-ins. ODBC drivers are loaded by the ODBC Driver Manager, which is itself a dynamically linked library. The application calls functions from the Driver Manager, and the manager then goes further down to the driver's level of abstraction. The driver has no information about the parent application and can't call any functions from it. The driver is not a plug-in, and the application doesn't have to worry about its existence. The way I read this section is that linking to a GPL ODBC driver would imply that I have to release my program under a GPL (compatible) licence.
This was one of the reasons why I added Postgres support to my program instead of MySQL [1]. They altered the licence for their drivers from LGPL to GPL so that you have to purchase a commercial licence. Rainer [1] In the meantime I am of course glad that I made this decision. I have not only learned a lot more about databases, but especially that Postgres is superior to MySQL ;-) Thanks, Andrei. ---(end of broadcast)--- TIP 2: Don't 'kill -9' the postmaster
Re: [GENERAL] any way for ORDER BY x to imply NULLS FIRST in 8.3?
Gregory Stark [EMAIL PROTECTED] writes: Tom Lane [EMAIL PROTECTED] writes: Not unless it's locked down at initdb time. Otherwise flipping the value bars you from using every existing index ... including those on the system catalogs ... which were made with the other setting. Surely if we added this we would also add explicit NULLS LAST clauses to all system catalog indexes and system views and make explicitly constructed scans in the backend use NULLS LAST. No, that's not the point; the point is that the performance of *user-issued* queries (or even more to the point, psql or pg_dump-issued queries) against the system catalogs would go to pot if they didn't match the catalog ordering, and a run-time-dependent interpretation of ORDER BY would make it very likely that the queries don't match, no matter which underlying index ordering is installed. Now, most if not all of the system indexes are on NOT NULL columns, so one possible avenue to resolving that objection would be to teach the planner that null sort direction can be disregarded when determining whether an index on a not-null column matches a query. But that already is making the patch 10x larger and more subtle than what Simon thinks he's proposing; and I doubt it's the only change we'd find we needed. A more general objection is that causing query semantics to change in subtle ways based on a GUC variable has more often than not proven to be a bad idea. regards, tom lane ---(end of broadcast)--- TIP 3: Have you checked our extensive FAQ? http://www.postgresql.org/docs/faq
Re: [GENERAL] Npsql is much faster than ODBC ?
Rainer Bauer wrote: Andrej Ricnik-Bay wrote: On Nov 7, 2007 2:40 PM, Rainer Bauer [EMAIL PROTECTED] wrote: That's nice to hear. But I respect licences as they are and the ODBCng driver is licenced under the GPL. That doesn't mean that you're not allowed to use it with commercial applications; it just means that you need to be happy to provide the source for it on request. Which is exactly the reason why the LGPL licence was created. So that any software can link against a library without the restrictions of the GPL. Keep in mind, though, that the ODBC driver is not linked to your app. It is only loaded on demand at run time, and can be replaced by any other ODBC driver. So AFAIU your application is shielded from GPL. IANAL of course. Saith the GPL: These requirements apply to the modified work as a whole. If identifiable sections of that work are not derived from the Program, and can be reasonably considered independent and separate works in themselves, then this License, and its terms, do not apply to those sections when you distribute them as separate works. [...] In addition, mere aggregation of another work not based on the Program with the Program (or with a work based on the Program) on a volume of a storage or distribution medium does not bring the other work under the scope of this License. -- Alvaro Herrera http://www.CommandPrompt.com/ The PostgreSQL Company - Command Prompt, Inc. ---(end of broadcast)--- TIP 5: don't forget to increase your free space map settings
Re: DB on a ramdisk (was Re: [GENERAL] Temporary, In-memory Postgres DB?)
Ron Johnson [EMAIL PROTECTED] writes: On 11/07/07 09:58, Tom Lane wrote: Or put it on a ramdisk filesystem. But doesn't that just add more overhead and reduce the amount of memory that the OS can cache things in? It's very possibly not a win, but the kinds of people who ask this question at all do not understand the concept of caching, so I'm sure they'll be happier with a solution where the data demonstrably never hits disk ;-) A case where it could be a win is where you are thrashing the DB with heavy update load. Even if everything is cached there will be a pretty serious amount of disk write traffic, which'd possibly interfere with other system activity. regards, tom lane ---(end of broadcast)--- TIP 3: Have you checked our extensive FAQ? http://www.postgresql.org/docs/faq
Re: DB on a ramdisk (was Re: [GENERAL] Temporary, In-memory Postgres DB?)
On 11/7/07, Ron Johnson [EMAIL PROTECTED] wrote: On 11/07/07 09:58, Tom Lane wrote: Ron Johnson [EMAIL PROTECTED] writes: On 11/07/07 09:03, Gauthier, Dave wrote: Is there such a thing as a temporary, probably in-memory, version of a Postgres DB? If you have enough RAM, and your database is small enough, the OS will eventually cache the whole thing. Or put it on a ramdisk filesystem. But doesn't that just add more overhead and reduce the amount of memory that the OS can cache things in? Didn't say it was the smart thing to do. Just that you could do it. I think if one is looking at in memory databases, PostgreSQL is NOT the first choice really. ---(end of broadcast)--- TIP 6: explain analyze is your friend
Re: [GENERAL] any way for ORDER BY x to imply NULLS FIRST in 8.3?
On Wed, 2007-11-07 at 10:23 -0500, Tom Lane wrote: Simon Riggs [EMAIL PROTECTED] writes: Perhaps we can have a parameter? default_null_sorting = 'last' # may alternatively be set to 'first' Not unless it's locked down at initdb time. Otherwise flipping the value bars you from using every existing index ... including those on the system catalogs ... which were made with the other setting. Seems reasonable, as a first step. There are a number of things that need to be moved from initdb to be settable parameters, so this is just one of them, for later releases. We should be able to enforce one setting of the parameter at bootstrap time, so the system indexes all get built the standard way with the initdb locale. We can then be free to set the locale for indexes after that, but that is another issue. I put this in the same category as altering the identifier case-folding rules. That has much less effect on application portability, so although the issues are similar the importance is not. Yeah, it'd be great to be all things to all people, but the implementation pain and risk of breakage of existing applications isn't worth it. I don't suggest we should be _all_ things to _all_ people, just that we should try to provide our capabilities to _more_ people. I think it's a great feature and I want to see more people appreciate that. -- Simon Riggs 2ndQuadrant http://www.2ndQuadrant.com ---(end of broadcast)--- TIP 2: Don't 'kill -9' the postmaster
Re: [GENERAL] Temporary, In-memory Postgres DB?
On 11/07/07 09:03, Gauthier, Dave wrote: This is a real longshot, but here goes... Is there such a thing as a temporary, probably in-memory, version of a Postgres DB? Sort of like SQLite, only with the features/function of PG? A DB like this would exist inside of, and for the duration of, a script/program that created it, then vanish when the script/program ends. Probably not, but if not, then this would be (IMO) a great addition to have, something that'd really make it distinct from MySQL. I'd use SQLite, but I want to have stored functions and other real database features that it just doesn't have. If you have enough RAM, and your database is small enough, the OS will eventually cache the whole thing. I know that's not exactly what you're talking about, but I think it's as close as you'll get. -- Ron Johnson, Jr. Jefferson LA USA Give a man a fish, and he eats for a day. Hit him with a fish, and he goes away for good! ---(end of broadcast)--- TIP 3: Have you checked our extensive FAQ? http://www.postgresql.org/docs/faq
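Postgres has no in-memory mode, but temporary tables get part of the way toward the "vanishes with the script" behaviour asked about above: they are private to the session and are dropped automatically when it ends. A sketch (table name and contents invented):

```sql
-- Lives only for this connection; other sessions never see it.
CREATE TEMP TABLE scratch (
    id  int PRIMARY KEY,
    val text
);
INSERT INTO scratch VALUES (1, 'dropped automatically at disconnect');
```

The data still spills to disk under write pressure, so this is an approximation of SQLite's in-memory databases, not an equivalent.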
[GENERAL] Recovering / undoing transactions?
Hi, About a month or so ago I read a blog entry or an article which seemed to describe a method, using dirty hackery with pg_resetxlog and possibly other tools, to forcibly revert the database to a previous state. The problem described was that some employee had executed a DELETE or UPDATE without a WHERE clause (or something like it) in autocommit mode, and the goal was to undo it. I can't find the article now, so can someone describe the technique here or point to the article? (I'm possibly misremembering important details about the article, so the correct answer to my question could be no, it can't be done). ---(end of broadcast)--- TIP 5: don't forget to increase your free space map settings
Re: [GENERAL] Npsql is much faster than ODBC ?
On Wed, 7 Nov 2007, Rainer Bauer wrote: The way I read this section is that linking to a GPL ODBC driver would imply that I have to release my program under a GPL (compatible) licence. What you actually link against is the ODBC implementation for your platform. If you're on something UNIX-ish, you're probably linking against unixODBC, which is available under the LGPL presumably to avoid this issue. -- * Greg Smith [EMAIL PROTECTED] http://www.gregsmith.com Baltimore, MD ---(end of broadcast)--- TIP 4: Have you searched our list archives? http://archives.postgresql.org/
Re: [GENERAL] Postgresql simple query performance question
Hi, The table has 43 columns; I have attached the columns list. They have many char() and numeric columns. For the table size, these are the corresponding entries from pg_class (foo is the table and the others are some of its indexes):

 relname         | reltuples | relpages
-----------------+-----------+----------
 foo             | 2.9384E7  | 825699
 foo_idx_pat     | 2.9384E7  | 684995
 foo_idx_service | 2.9384E7  | 433549
 foo_idx_serv    | 2.9384E7  | 433435
 foo_pk          | 2.9384E7  | 109057

Thanks Sharmila - Original Message From: Gregory Stark [EMAIL PROTECTED] To: SHARMILA JOTHIRAJAH [EMAIL PROTECTED] Cc: Pavel Stehule [EMAIL PROTECTED]; pgsql-general@postgresql.org Sent: Tuesday, November 6, 2007 8:03:48 PM Subject: Re: [GENERAL] Postgresql simple query performance question SHARMILA JOTHIRAJAH [EMAIL PROTECTED] writes: I understand that. But why is it that when oracle is given a hint to do a full table scan instead of using an index to get the count, it is still faster than postgres when both have the same explain plan? Oracle takes 34 sec and postgres takes 1 m10 sec. Is there anything that can be done in postgresql for speeding this up? How large are the actual respective data files? What are the columns in these tables? Do you have many char() and NUMERIC columns? -- Gregory Stark EnterpriseDB http://www.enterprisedb.com Ask me about EnterpriseDB's On-Demand Production Tuning ---(end of broadcast)--- TIP 6: explain analyze is your friend

rmrs=# \d foo
            Column             |            Type             |   Modifiers
-------------------------------+-----------------------------+---------------
 foo_id                        | numeric(22,0)               | not null
 institution_id                | numeric(22,0)               | not null
 patient_id                    | numeric(22,0)               | not null
 top_parent_service_code       | character varying(40)       | not null
 top_parent_service_sys_id     | numeric(22,0)               | not null
 physiologic_time              | timestamp without time zone | not null
 top_parent_filler_order_id    | numeric(22,0)               |
 dewey_decimal_tree_sort_order | character varying(100)      | not null
 sister_sequence_number        | numeric(22,0)               |
 service_code                  | character varying(40)       | not null
 service_sys_id                | numeric(22,0)               | not null
 filler_order_id               | numeric(22,0)               |
 immediate_variable_id         | numeric(22,0)               |
 data_arrival_time             | timestamp without time zone | default now()
 specimen_id                   | numeric(22,0)               |
 value_type                    | character varying(40)       |
 value_text_for_display        | character varying(1010)     |
 value_modifier_text           | character varying(1000)     |
 value_if_type_is_coded_code   | character varying(40)       |
 value_if_type_is_coded_sys_id | numeric(22,0)               |
 value_if_type_is_numeric      | double precision            |
 value_if_type_is_provider_id  | numeric(22,0)               |
 value_if_type_is_location_id  | numeric(22,0)               |
 value_if_type_is_time         | timestamp without time zone |
 status_code                   | character varying(40)       |
 clinical_status_code          | character varying(40)       |
 interpretation_code           | character varying(40)       |
 off_scale_exception_code      | character varying(40)       |
 delta_check_code              | character varying(40)       |
 producer_application_id       | numeric(22,0)               |
 producer_location_id          | numeric(22,0)               |
 origination_code              | character varying(40)       |
 delivering_message_id         | numeric(22,0)               |
 deliv_sub_id                  | character varying(40)       |
 deliv_value                   | character varying(100)      |
 deliv_value_text              | character varying(1000)     |
 deliv_value_code_system       | character varying(100)      |
 value_quantitative_form       | character varying(40)       |
 deliv_service_code            | character varying(40)       |
 deliv_service_code_text       | character varying(100)      |
 deliv_service_code_system     | character varying(40)       |
 deliv_unit_code               | character varying(40)       |
 deliv_unit_text               | character varying(100)      |
 deliv_unit_code_system        | character varying(40)       |
 deliv_normal_range            | character varying(100)      |
 normal_range_text             | character varying(100)      |
 value_modifier_text_type_code
Re: [GENERAL] Npsql is much faster than ODBC ?
Thanks. I have tried your program. But it seems that it has trouble with EUC_CN: it returns unrecognized values for empty columns. BTW: I've solved this problem. Everyone using ODBC on Windows should read this mail: psqlODBC with Visual Studio 2005 and Connection Pooling for newbies http://archives.postgresql.org/pgsql-odbc/2005-12/msg00274.php On Nov 7, 2007 4:48 AM, Alvaro Herrera [EMAIL PROTECTED] wrote: ? wrote: Hi all, Recently I found my C/S program had become slow. So I analysed the tcp traffic between my program and the Postgresql database. I found there were occasional tcp re-transmissions, and that's why my program slows down. But when I changed to Npgsql, the tcp traffic was cut to only 1/4 and there were far fewer re-transmissions. I am not sure whether it's a problem with ODBC or whether I am not using ODBC correctly. FYI there's another Postgres ODBC driver that is said to have better performance. https://projects.commandprompt.com/public/odbcng (Yes, my company maintains it) -- Alvaro Herrera http://www.CommandPrompt.com/ PostgreSQL Replication, Consulting, Custom Development, 24x7 support -- [Signature, translated from Chinese:] In ancient Greece there was a kind of philosopher who walked around in very loose robes. They would trace graceful curves that contained the whole of their hearts and minds. Their friends were also sages, and only sages could understand one another's curves and one another's minds.
Re: [GENERAL] Postgresql simple query performance question
Hi, we are testing with version PostgreSQL 8.2.3. We already have a production system in Oracle and we wanted to migrate it to PostgreSQL. If some tests are already done, are the results available for us to see? I'll also check Postgres 8.3 beta. Thanks again Sharmila - Original Message From: Simon Riggs [EMAIL PROTECTED] To: Bill Moran [EMAIL PROTECTED] Cc: SHARMILA JOTHIRAJAH [EMAIL PROTECTED]; pgsql-general@postgresql.org Sent: Wednesday, November 7, 2007 6:34:26 AM Subject: Re: [GENERAL] Postgresql simple query performance question On Tue, 2007-11-06 at 09:29 -0500, Bill Moran wrote: In response to SHARMILA JOTHIRAJAH [EMAIL PROTECTED]: Hi We are in the process of testing for migration of our database from Oracle to Postgresql. I have a simple query: Select count(*) from foo This is asked a lot. The quick answer is that PostgreSQL's method of MVCC makes it impossible to make this query fast. Perhaps, someday, some brilliant developer will come up with an optimization, but that hasn't happened yet. What release level is being tested? It may already have happened. 8.3 is substantially faster at seq scans, so the tests should be re-run on 8.3 beta. Also, re-run the Postgres test. It should be faster the second time, even if the database server is restarted between tests. -- Simon Riggs 2ndQuadrant http://www.2ndQuadrant.com ---(end of broadcast)--- TIP 5: don't forget to increase your free space map settings
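When an exact count is not required, the estimate kept by the planner — the reltuples figure quoted from pg_class earlier in this thread — can stand in for SELECT count(*) at essentially zero cost; it is only as fresh as the last ANALYZE (or VACUUM):

```sql
-- Approximate row count from the statistics collected by ANALYZE:
SELECT reltuples::bigint AS approx_rows
FROM pg_class
WHERE relname = 'foo';
```

For a 29M-row table this returns instantly, whereas the exact count must scan every visible tuple under MVCC.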
Re: [GENERAL] any way for ORDER BY x to imply NULLS FIRST in 8.3?
Simon Riggs wrote: If an application has already made that choice then we should allow them the opportunity to work with PostgreSQL. The application may be at fault, but PostgreSQL is the loser because of that decision. The SQL Standard says that the default for this is defined by the implementation; that doesn't bar us from changing the implementation if we wish. We can do that without changing PostgreSQL's historic default. Perhaps we can have a parameter? default_null_sorting = 'last' # may alternatively be set to 'first' (or another wording/meaning.) That is what I thought you'd implemented, otherwise I would have suggested this myself way back. This new parameter would be a small change, but will make a major difference to application portability. This seems like the key to unlocking your new functionality for most people. You already have that control at the SQL SELECT level so you are just avoiding typing to add the GUC parameter. I think we need more requests for such a feature before we add it. -- Bruce Momjian [EMAIL PROTECTED]http://momjian.us EnterpriseDB http://postgres.enterprisedb.com + If your life is a hard drive, Christ can be your backup. + ---(end of broadcast)--- TIP 2: Don't 'kill -9' the postmaster
Re: [GENERAL] Syntax error in a large COPY
In response to Thomas Kellerer [EMAIL PROTECTED]: Tom Lane, 07.11.2007 06:14: Thomas Kellerer [EMAIL PROTECTED] writes: If everyone simply top-posted, there would be no need for me to scroll down, just to find a two line answer below a forty line quote - which I personally find more irritating than top-posting. I think you're ignoring my basic point, which was that people shouldn't be quoting forty lines' worth in the first place. *Especially* not if they only have two lines to contribute. No, I did get your point. My point is: with top-posting I don't care how many lines were repeated because I don't have to scroll. Considering there is an RFC that recommends inline posting over top-posting (http://tools.ietf.org/html/rfc1855), and considering the fact that this topic has been beat to death on dozens of mailing lists and the predominant preference is _not_ for top-posting -- perhaps you should either follow the preferences of the group, or leave the group. But this horse has been beat to death before... Obviously not, as it keeps coming back to life. I guess it's an undead horse? -- Bill Moran http://www.potentialtech.com ---(end of broadcast)--- TIP 2: Don't 'kill -9' the postmaster
Re: [GENERAL] number errors
Hi all, When my application gets errors back from the database, some of the error numbers are identical. Why are the error numbers identical? Does the ODBC driver or PostgreSQL return them? It's running on Windows. Thanks. Hello, If you use ODBC you should distinguish errors from the ODBC driver from errors from PostgreSQL: the ODBC driver returns its own error codes, and composes the error description from the error code and the text from the PostgreSQL server. So you should have these numbers: 1) ODBC error code - described in MSDN; 2) Native PostgreSQL error code - described in the PostgreSQL manual; 3) Error description - composed by the ODBC driver, based on the description and error code returned from the server. Hi, Thanks for the help. I'm using the ODBC driver, so the ODBC driver returns the errors, ok? So are these errors independent of the PostgreSQL server? ---(end of broadcast)--- TIP 6: explain analyze is your friend
[GENERAL] odbcng
On Tue, Nov 06, 2007 at 05:48:12PM -0300, Alvaro Herrera wrote: FYI there's another Postgres ODBC driver that is said to have better performance. https://projects.commandprompt.com/public/odbcng (Yes, my company maintains it) Are there any known issues when calling it from VB? I've got a VB (MS Access) client that uses PG as its backend and it seems to die horribly when doing any sort of query that returns a text column. This happens through either DAO or ADO; though DAO gives the error "the size of a field is too long", ADO just segfaults. For example, the following code doesn't work for me. Looks like the sort of thing that should get lots of test coverage, so maybe it's something on my system.

Public Sub test()
    Dim con As ADODB.Connection, rs As ADODB.Recordset
    Set con = New ADODB.Connection
    con.Open "DSN=badgerstudy"
    Set rs = con.Execute("SELECT 1, 'foo'::TEXT, 'bar'")
    While Not rs.EOF
        rs.MoveNext
    Wend
End Sub

Thanks, Sam ---(end of broadcast)--- TIP 6: explain analyze is your friend
Re: [GENERAL] Syntax error in a large COPY
On 11/8/07, Collin Kidder [EMAIL PROTECTED] wrote: I'm with Thomas. I think that, while inline posting is a good thing, bottom posting is dead stupid and wastes my time. Just as bad as top-posting, really. It is far easier to follow a thread with top posting as the relevant text is right there at the top ready to be read. The relevant bit being what? Two lines dangling loosely at the top of a mail? You omitted the crucial bit here: It is far easier FOR ME WITH MY CURRENT MAIL CLIENT to follow a thread with top posting ... If that's good enough reason for you to ignore RFCs and complain about the habit on this list, by all means, there's no point in arguing. But we can flog the dead horse some more No, just not everyone agrees with your viewpoint on this topic. Top posting has its place and some of us prefer it. But they could just adhere to the law of the land, or when in Rome, ... practice instead of kicking off fuss. And with my mail client top-posting has no place. Let's just stick to good old standards. [ ... offensive material removed ... ] And so... with that my view is out there. I hate bottom posting. But I for one will do it to keep the peace. You were actually using the appropriate interleaved quoting style, not bottom posting (minus the 'trimming', mind you). At least get your terminology right. ;D Cheers, Andrej -- Please don't top post, and don't use HTML e-Mail :} Make your quotes concise. http://www.american.edu/econ/notes/htmlmail.htm ---(end of broadcast)--- TIP 9: In versions below 8.0, the planner will ignore your desire to choose an index scan if your joining column's datatypes do not match
Re: DB on a ramdisk (was Re: [GENERAL] Temporary, In-memory Postgres DB?)
I understand caching. Here's the reason I'm inquiring into this line of thought... I already have a big on-disk DB with lots and lots of data, and lots of stored functions and a data model that works with DB loaders and other code that queries out data. I also have users that want all of that, except for the particular data content. They basically want to load a DB with data that's in their scratch area without polluting what's in the main DB. The cardinality of this personal, scratch data will be orders of magnitude smaller than what's in the main (could all fit in memory). And once loaded, they would be able to run all the same DB load and query tools that work on the main DB, just redirect to the small, personal DB. This would be a good app for SQLite, but SQLite can't do a lot of the things that are running in the main DB (like stored procedures). It's become clear that PG cannot do a pure in-memory DB like SQLite. It's why I initially called this a longshot and the answer to my question is probably no. But enabling something like a pure in-memory (and temporary) DB for small apps that can reap all the wonderful features of PG would make it very attractive for some users. Just something to think about for future development. One question I had earlier that I don't think got answered was how to undo an initdb. dropdb drops a DB, but how do I undo an initdb? -dave -Original Message- From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of Tom Lane Sent: Wednesday, November 07, 2007 12:05 PM To: Ron Johnson Cc: pgsql-general@postgresql.org Subject: Re: DB on a ramdisk (was Re: [GENERAL] Temporary, In-memory Postgres DB?) Ron Johnson [EMAIL PROTECTED] writes: On 11/07/07 09:58, Tom Lane wrote: Or put it on a ramdisk filesystem. But doesn't that just add more overhead and reduce the amount of memory that the OS can cache things in? 
It's very possibly not a win, but the kinds of people who ask this question at all do not understand the concept of caching, so I'm sure they'll be happier with a solution where the data demonstrably never hits disk ;-) A case where it could be a win is where you are thrashing the DB with heavy update load. Even if everything is cached there will be a pretty serious amount of disk write traffic, which'd possibly interfere with other system activity. regards, tom lane
Re: [GENERAL] strange timezone problem
On Wed, 7 Nov 2007, Tom Lane wrote: Nick Johnson [EMAIL PROTECTED] writes: I'm using PostgreSQL 8.2.3 and seeing this behaviour with timezones: [snip] Shouldn't that second row have been in the results of the second query? Huh? Those results look perfectly sane to me. Ah, you're right of course. Just a complete mental lapse on my part (thus the need of a 'sanity' check). That's correct ... as of last year, DST extends through the first Sunday in November in the USA. Twice in one morning the database was cleverer than me. Nick
Re: [GENERAL] any way for ORDER BY x to imply NULLS FIRST in 8.3?
Simon Riggs [EMAIL PROTECTED] writes: Perhaps we can have a parameter? default_null_sorting = 'last' # may alternatively be set to 'first' Not unless it's locked down at initdb time. Otherwise flipping the value bars you from using every existing index ... including those on the system catalogs ... which were made with the other setting. I put this in the same category as altering the identifier case-folding rules. Yeah, it'd be great to be all things to all people, but the implementation pain and risk of breakage of existing applications isn't worth it. regards, tom lane
Re: [GENERAL] Temporary, In-memory Postgres DB?
On 11/7/07, Gauthier, Dave [EMAIL PROTECTED] wrote: This is a real longshot, but here goes... Is there such a thing as a temporary, probably in-memory, version of a Postgres DB? Sort of like SQLite, only with the features/function of PG? A DB like this would exist inside of, and for the duration of, a script/program that created it, then vanish when the script/program ends. Mount a ramdisk, initdb there, run the db from there. Conversely, create the db normally, mount a ram disk, create a tablespace there, create a db and its tables there. The second method might not be optimal because if you don't cleanly remove the db / tablespace postgresql might have some issues starting up after a reboot.
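The second approach described above can be expressed entirely in SQL once the ramdisk is mounted. A minimal sketch — the mount point and all object names are made up, and the filesystem must already exist and be writable by the postgres user:

```sql
-- Put a scratch database's storage on a ramdisk via a tablespace.
CREATE TABLESPACE ram_ts LOCATION '/mnt/ramdisk/pg_scratch';
CREATE DATABASE scratch TABLESPACE ram_ts;

-- Objects created in the scratch database now default to the ramdisk;
-- as noted above, drop the database and tablespace cleanly before a
-- reboot, or the server may complain about the vanished files.
```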
Re: [GENERAL] Npgsql is much faster than ODBC?
Greg Smith wrote: On Wed, 7 Nov 2007, Rainer Bauer wrote: The way I read this section is that linking to a GPL ODBC driver would imply that I have to release my program under a GPL (compatible) licence. What you actually link against is the ODBC implementation for your platform. If you're on something UNIX-ish, you're probably linking against unixODBC, which is available under the LGPL presumably to avoid this issue. My program runs under MS Windows only. Which means the Microsoft ODBC driver manager is closed source. I have found the original announcement from CommandPrompt and it seems that this topic was already discussed before: http://archives.postgresql.org/pgsql-odbc/2005-04/msg00084.php Rainer
Re: [GENERAL] prepared statements suboptimal?
rihad wrote: Hi, I'm planning to use prepared statements of indefinite lifetime in a daemon that will execute the same statements rather frequently in reply to client requests. This link: http://www.postgresql.org/docs/8.3/static/sql-prepare.html has a note on performance: In some situations, the query plan produced for a prepared statement will be inferior to the query plan that would have been chosen if the statement had been submitted and executed normally. [...] I don't understand why postgres couldn't plan this: SELECT foo.e, foo.f FROM foo WHERE pk=$1 AND b=$2 AND status='1' AND c = $3; to be later executed any slower than SELECT foo.e, foo.f FROM foo WHERE pk='abcabc' AND b='13' AND status='1' AND c = '2007-11-20 13:14:15'; For example, if the table contains almost no rows in the beginning, the planner will choose to use a full table scan even if - say - 'pk' is the primary key. If you use the same execution plan later when the table is big, the full table scan will hurt considerably, and you would be much better off with an index lookup. Other scenarios are certainly conceivable, but this one is easy to understand. Can I help it make more educated guesses? In what scenarios could prepared statements turn around and bite me, being slower than simple queries? Is this a real problem in practice? Should I refresh prepared statements from time to time? If so, how? Only by deallocating them and preparing anew? Any knob to tweak for that? You'll probably have to deallocate them and allocate them anew. Yours, Laurenz Albe
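The refresh cycle Laurenz suggests can be sketched in SQL. The statement name is made up; EXPLAIN EXECUTE shows the plan that was frozen in at PREPARE time, which is how a stale plan can be spotted:

```sql
-- Prepare once; the plan is fixed at this point, using current statistics.
PREPARE get_foo (text, int, timestamptz) AS
    SELECT e, f FROM foo
    WHERE pk = $1 AND b = $2 AND status = '1' AND c = $3;

-- Inspect the frozen plan (e.g. to see whether it still seq-scans).
EXPLAIN EXECUTE get_foo('abcabc', 13, '2007-11-20 13:14:15');

-- After significant data growth (and an ANALYZE), discard and re-plan:
DEALLOCATE get_foo;
PREPARE get_foo (text, int, timestamptz) AS
    SELECT e, f FROM foo
    WHERE pk = $1 AND b = $2 AND status = '1' AND c = $3;
```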
Re: [GENERAL] md5() sorting
On Wed, Nov 07, 2007 at 03:54:02PM +0100, Martijn van Oosterhout wrote: Should I be going about this sorting or hashing or detection business in another way entirely which can be done at the SQL level ? I'm wondering if you cast the md5sum as a bytea instead of text and then sort, if that would solve it simply. Along the lines of ... ORDER BY decode(md5('...'), 'hex'); ? I knew I'd ask here ;-) BTW, my Google Fu was lacking or I'd have found this earlier: http://www.varlena.com/GeneralBits/21.php (see middle of page) Thanks, Karsten -- GPG key ID E4071346 @ wwwkeys.pgp.net E167 67FD A291 2BEA 73BD 4537 78B9 A9F9 E407 1346
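Spelled out as a full query, the bytea trick above looks like this (table and column names are made up). decode(..., 'hex') turns the hex digest back into raw bytes, so the sort compares bytes rather than locale-dependent text:

```sql
-- Sort by the raw digest bytes instead of the hex string, which makes
-- the ordering independent of the database's collation settings.
SELECT id, md5(payload) AS digest
FROM documents
ORDER BY decode(md5(payload), 'hex');
```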
[GENERAL] Rollback capacity
Once again, I'm trying to translate my knowledge of Informix to PostgreSQL. I tried the manual and Google, but could not find anything relevant. Informix keeps transaction logs in a dedicated, pre-allocated disk area that, until very recent versions, could not grow dynamically. It is the DBA's responsibility to continually back up these transaction logs so the space may be recycled. As such, Informix is limited in the size of a transaction that it can roll back, because it eventually has to re-use existing transaction log space. If it were to overwrite the log space containing the beginning of the transaction, it could not roll back from the internal logs. So if you do something crazy, like delete 4 million rows, there's a good chance Informix will just throw a "long transaction aborted" error and roll the transaction back when it reaches a pre-set high-water mark. How does PostgreSQL handle big transactions and potential rollbacks? Since the WAL is not strictly pre-allocated space, can it just keep going until the WAL files fill up the free disk space? What would be the consequences of such an incident (filling up disk space with WAL files)? Is the WAL even relevant to rollbacks? I am aware of the statement_timeout parameter, which could prevent huge transactions, but there is no useful correlation between the time a statement takes and the server's capacity to roll it back. Thanks, Jeff
Re: [GENERAL] Rollback capacity
Jeff Larsen [EMAIL PROTECTED] writes: Informix keeps transaction logs in a dedicated, pre-allocated disk area that, until very recent versions, could not grow dynamically. It is the DBA's responsibility to continually backup these transaction logs so the space may be recycled. As such, Informix is limited in the size of a transaction that it can roll back, because it eventually has to re-use existing transaction log space. Yeah, Oracle has that problem too. Postgres keeps the old row versions in the main data area, so the disk space cost of a long transaction is paid out of your main data store, not any dedicated area. Once the transaction is committed or rolled back, a subsequent VACUUM will eventually make the now-redundant space available for re-use. Our way has its pluses and minuses compared to the other, but it's definitely quite different. Instead of worrying about transaction log size, you worry about how often to VACUUM. Since the WAL is not strictly pre-allocated space, can it just keep going until the WAL files fill up the free disk space? What would be the consequences of such an incident (filling up disk space with WAL files)? Is the WAL even relevant to rollbacks? It's not; the amount of WAL space needed is determined only by the checkpoint spacing. (You can have transactions that run much longer than the checkpoint interval.) If you do run out of space for WAL, the database PANICs and shuts down (but without losing any committed transactions, so you can restart once you've cleared off some space). One advantage to keeping WAL and main data area on separate partitions is that then bloat of the main data area cannot lead to a PANIC of this type, though out-of-space in the data area is still going to lead to failures of inserts and updates. regards, tom lane
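A small demonstration of the MVCC behaviour Tom describes — the table name is made up. A rolled-back UPDATE leaves its abandoned new row versions in the table's own data files, and a later VACUUM makes that space reusable:

```sql
BEGIN;
UPDATE big_table SET flag = true;  -- writes a new version of every row
ROLLBACK;                          -- cheap: nothing is undone or replayed

-- The abandoned row versions still occupy space in the main data area
-- until VACUUM marks it as reusable (VERBOSE reports the dead tuples):
VACUUM VERBOSE big_table;
```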
Re: [GENERAL] Recovering / undoing transactions?
Ivan Voras [EMAIL PROTECTED] writes: About a month or so ago I read a blog entry or an article which seems to have described a method, using dirty hackery with pg_resetxlog and possibly other tools, to forcibly undo the database to a previous state. The problem described was that some employee had executed a DELETE or UPDATE without WHERE or something like it in autocommit mode and the goal was to undo it. I can't find the article now so can someone describe the technique here or point to the article? (I'm possibly misremembering important details about the article so the correct answer to my question could be no, it can't be done). It's not really possible to do that. The blogger might've thought he'd accomplished something but I seriously doubt that his database was consistent afterward. You can go back in time using PITR, if you had the foresight and resources to set up continuous archiving, but just whacking pg_xlog around is far from sufficient. regards, tom lane
Re: [GENERAL] Recovering / undoing transactions?
On 07/11/2007, Tom Lane [EMAIL PROTECTED] wrote: It's not really possible to do that. The blogger might've thought he'd accomplished something but I seriously doubt that his database was consistent afterward. You can go back in time using PITR, if you had the foresight and resources to set up continuous archiving, but just whacking pg_xlog around is far from sufficient. Ok, just to verify I'm thinking about it in the right way: in abstract, with PITR, I would need a known-good starting point (e.g. a full backup) + files from pg_xlog created from the time of the starting-point, then restore the starting-point backup and then restore from PITR/xlog up to the point I want?
Re: [GENERAL] any way for ORDER BY x to imply NULLS FIRST in 8.3?
Reece Hart [EMAIL PROTECTED] writes: However, it's not clear that you've considered a clause like 'ORDER BY (foo IS NULL), foo', which I believe is not implementation dependent. Yeah, that should work reasonably portably ... where portable means equally lousy performance in every implementation, unfortunately :-(. I rather doubt that many implementations will see through that to decide that they can avoid an explicit sort. Well, an index on ((foo IS NULL), foo) might improve the performance when sorting along these columns, but sure it's not a cure-all. And you still have to modify the SQL and the database schema ... regards TV
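A sketch of the expression index suggested above — table and column names are made up. Note that since boolean false sorts before true, ORDER BY (foo IS NULL) actually puts the NULLs last; to get NULLS FIRST (the subject of this thread), flip the test to (foo IS NOT NULL), which is false for NULL rows:

```sql
-- Expression index matching the ORDER BY exactly, so the planner can
-- at least consider using it to avoid an explicit sort.
CREATE INDEX t_foo_nulls_first_idx ON t ((foo IS NOT NULL), foo);

SELECT * FROM t ORDER BY (foo IS NOT NULL), foo;  -- NULLs come first
```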
[GENERAL] System V IPC on Windows
Does anyone know how to adjust the IPC settings in Windows? If I wanted to increase shared_buffers settings, in Linux I would simply adjust the SHMMAX and SHMMIN settings, following the docs (http://www.postgresql.org/docs/8.2/static/kernel-resources.html#SYSVIPC). In Windows, ? Cheers, Kevin
Re: [GENERAL] Recovering / undoing transactions?
Ivan Voras [EMAIL PROTECTED] writes: Ok, just to verify I'm thinking about it in the right way: in abstract, with PITR, I would need a known-good starting point (e.g. a full backup) + files from pg_xlog created from the time of the starting-point, then restore the starting-point backup and then restore from PITR/xlog up to the point I want? Right. regards, tom lane
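For the 8.x-era PITR mechanism being discussed, the recipe Ivan outlines boils down to restoring the base backup into a fresh data directory and dropping in a recovery.conf that says where the archived WAL lives and when to stop replaying. A minimal sketch — the archive path and target timestamp are made up:

```
# recovery.conf, placed in the restored data directory before startup
restore_command = 'cp /mnt/wal_archive/%f "%p"'
recovery_target_time = '2007-11-07 11:59:00'
```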
Re: [GENERAL] System V IPC on Windows
Kevin Neufeld [EMAIL PROTECTED] writes: Does anyone know how to adjust the IPC settings in Windows? There aren't any such settings in Windows, AFAIK. There's certainly not anything directly corresponding to SHMMAX, say. What have you run into that makes you think you need to adjust something? regards, tom lane
[GENERAL] what is the date format in binary query results
What can I expect for a date format from a PGresult containing binary results? Specifically the Oid type is TIMESTAMPTZOID. In this case what does the PQgetvalue actually return? What does the char* point to? Thanks. - samantha
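For what it's worth, the binary layout of a TIMESTAMPTZ value depends on how the server was built: PQgetvalue returns a pointer to 8 bytes in network byte order, holding either an int64 count of microseconds or a float8 count of seconds since 2000-01-01 00:00:00 UTC, depending on whether the server was configured with --enable-integer-datetimes. The server reports which format it uses:

```sql
-- 'on'  = 8-byte big-endian int64, microseconds since 2000-01-01 UTC
-- 'off' = 8-byte big-endian float8, seconds since 2000-01-01 UTC
SHOW integer_datetimes;
```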
Re: [GENERAL] Syntax error in a large COPY
Andrej Ricnik-Bay wrote: On 11/8/07, Collin Kidder [EMAIL PROTECTED] wrote: I'm with Thomas. I think that, while inline posting is a good thing, bottom posting is dead stupid and wastes my time. Just as bad as top-posting, really. It is far easier to follow a thread with top posting as the relevant text is right there at the top ready to be read. The relevant bit being what? Two lines dangling loosely at the top of a mail? You omitted the crucial bit here: It is far easier FOR ME WITH MY CURRENT MAIL CLIENT to follow a thread with top posting ... If that's good enough reason for you to ignore RFCs and complain about the habit on this list, by all means, there's no point in arguing. But we can flog the dead horse some more Offtopic, but what actually confuses me most is people replying to quoted text and not putting a blank line before their additional text, like above. It makes it very hard for me to quickly see the new text. FYI, I only top post when I want to talk _about_ the email, like "Is this a TODO item?", and put a dashed line under my text so people realize I top-posted and there is nothing new below my text. -- Bruce Momjian [EMAIL PROTECTED] http://momjian.us EnterpriseDB http://postgres.enterprisedb.com + If your life is a hard drive, Christ can be your backup. +
Re: [GENERAL] System V IPC on Windows
Does anyone know how to adjust the IPC settings in Windows? There aren't any such settings in Windows, AFAIK. Correct. The only real adjustable limit is the size of the Windows pagefile, but that one is normally dynamic. There must, however, be room for all the shared memory in it: the data isn't necessarily going to be written there, but the space is reserved. That said, if you need to increase the pagefile size to accommodate your shared buffers, you likely have way too large a value for shared_buffers. /Magnus