[ADMIN] How to test Miscellaneous Functions
Hello all, I am trying to test the following functions, but I don't know how. Please help me with how to test them, and give some examples: 1. pg_get_viewdef 2. pg_get_ruledef 3. pg_get_indexdef 4. pg_get_userbyid 5. obj_description. Thanks in advance --mathan ---(end of broadcast)--- TIP 3: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [EMAIL PROTECTED] so that your message can get through to the mailing list cleanly
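For the archives: these functions read object definitions back out of the system catalogs, so one way to exercise them is to create throwaway objects and compare the output with what you created. A sketch with made-up object names (exact argument forms vary a little between versions):

```sql
CREATE TABLE t (id int);
CREATE INDEX t_id_idx ON t (id);
CREATE VIEW v AS SELECT id FROM t;
COMMENT ON TABLE t IS 'throwaway test table';

SELECT pg_get_viewdef('v');        -- should echo back the SELECT above
SELECT pg_get_indexdef((SELECT oid FROM pg_class WHERE relname = 't_id_idx'));
SELECT pg_get_ruledef(oid) FROM pg_rewrite
 WHERE ev_class = (SELECT oid FROM pg_class WHERE relname = 'v');  -- the view's _RETURN rule
SELECT pg_get_userbyid(usesysid) FROM pg_user WHERE usename = current_user;
SELECT obj_description((SELECT oid FROM pg_class WHERE relname = 't'), 'pg_class');
```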
Re: [ADMIN] Backup procedure
I think it's fair to say that the documentation is outdated on this subject, because it had in mind a different definition of consistent snapshots. See: http://marc.theaimsgroup.com/?l=postgresql-admin&w=2&r=1&s=LVM+snapshots&q=b where Tom Lane and Peter Eisentraut agree that if your hardware or software can copy the data and log files from a given instant in time, you have a consistent backup. (In that thread, you can even see examples of the postmaster logs at startup with the backup data.) Bruce Momjian makes the same point earlier in this thread as well. Note that the way to recover data from a filesystem-level backup is to start a postmaster on the backup copy of the $PGDATA cluster, dump from the backup, and restore into the production databases. (Piping this will avoid having to create dump files to restore from.) You can't simply copy files around to recover data from backup. The bottom line is that if you want backups that are as reliable as pg_dumps (the gold standard), you can get them off of filesystem snapshots. Filesystem snapshots are a lot faster than pg_dumps. Cheers, Murthy -----Original Message----- From: Enio Schutt Junior [mailto:[EMAIL PROTECTED] Sent: Wednesday, August 13, 2003 13:12 To: [EMAIL PROTECTED] Subject: [ADMIN] Backup procedure Hi, According to some answers about the consistency of a database in a snapshot of a whole HD (excluding proc, tmp, and dev, of course) and without stopping the postmaster service, it is said it should work because it's like if the system had crashed, so restoring files from the HD and starting the server would be ok. According to the PostgreSQL administrator's guide, section 9.2 (File system level backup), item #1: "The database server *must* be shut down in order to get a usable backup. Half-way measures such as disallowing all connections will not work as there is always some buffering going on. For this reason it is also not advisable to trust file systems that claim to support consistent snapshots."
Assuming the admin's guide is a little bit more right, my backup restore procedure would be as follows:
1 - Restore data from the backup HD onto a new HD.
2 - Copy some files from /usr/local/pgsql/data, like postgresql.conf, pg_hba.conf, pg_ident.conf, and any other authentication files whose names appear in pg_hba.conf (I suspect PG_VERSION and postmaster.opts would not be needed, as initdb should create these files), to another directory.
3 - Remove the entire /usr/local/pgsql/data directory.
4 - Run initdb -D /usr/local/pgsql/data
5 - Copy those files back to /usr/local/pgsql/data
6 - Start the server.
7 - Restore the information generated by pg_dump (tables) and pg_dumpall -g (users and groups).
Would this work well? I know there are other procedures, and I would be pleased to know that this procedure works fine. Thanks in advance, Enio
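Enio's seven steps sketch out roughly like this as a script (untested, paths as in the post; the scratch directory and dump filenames are made up, and anything that removes $PGDATA deserves a second look before running):

```shell
PGDATA=/usr/local/pgsql/data
SAVED=/root/pg-conf-saved            # scratch directory (made-up name)

mkdir -p $SAVED
cp $PGDATA/postgresql.conf $PGDATA/pg_hba.conf $PGDATA/pg_ident.conf $SAVED/

pg_ctl -D $PGDATA stop               # make sure the postmaster is down
rm -rf $PGDATA                       # step 3
initdb -D $PGDATA                    # step 4
cp $SAVED/* $PGDATA/                 # step 5
pg_ctl -D $PGDATA start              # step 6

psql template1 -f globals.sql        # step 7: output of pg_dumpall -g (made-up filename)
psql mydb -f mydb.sql                #         output of pg_dump mydb (made-up filename)
```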
[ADMIN] forcce a table in memory
Is there any way in Postgres that you can force a table to be in memory?
Re: [ADMIN] Backup routine
On Wed, 13 Aug 2003, Murthy Kambhampaty wrote: On Monday, August 11, 2003 17:26, [EMAIL PROTECTED] wrote: In article [EMAIL PROTECTED], Bruce Momjian [EMAIL PROTECTED] wrote: Also, I assume you have to stop the server just for a moment while you do the freeze, right? It depends on whether you need a known state or just a consistent state. Taking a snapshot of the system will get you a consistent state, just as if the machine had crashed. You can restore that snapshot, bring PG back up, and everything will work. Of course, you really have no way of knowing which transactions were committed and which were not. On the other hand, stop the server/snapshot/start the server gives you not only consistency, but a known state. That is, you know for sure that whatever was done before you stopped the server is what was done. But these considerations apply to pg_dumps as well, no? I guess with pg_dump you CAN dump one database at a time, and you can quiesce each database before dumping -- disallow connections to that database for the duration of the pg_dump, and wait for all transactions to complete before starting pg_dump -- which is a little more flexible. Given the time it takes to do a pg_dump on databases over a few gigabytes in size, though, I can't say I find the flexibility valuable. But that's still not exactly the same. If you pg_dump a single database in a cluster, THAT database will be consistent with itself, guaranteed by MVCC. Sure, one database in a cluster may not be consistent with another database, but generally, separate databases are treated/considered to be independent.
[ADMIN] Backup procedure
Hi, According to some answers about the consistency of a database in a snapshot of a whole HD (excluding proc, tmp, and dev, of course) and without stopping the postmaster service, it is said it should work because it's like if the system had crashed, so restoring files from the HD and starting the server would be ok. According to the PostgreSQL administrator's guide, section 9.2 (File system level backup), item #1: "The database server *must* be shut down in order to get a usable backup. Half-way measures such as disallowing all connections will not work as there is always some buffering going on. For this reason it is also not advisable to trust file systems that claim to support "consistent snapshots"." Assuming the admin's guide is a "little bit more" right, my backup restore procedure would be as follows:
1 - Restore data from the backup HD onto a new HD.
2 - Copy some files from /usr/local/pgsql/data, like postgresql.conf, pg_hba.conf, pg_ident.conf, and some other authentication files whose names appear in pg_hba.conf (I suspect PG_VERSION and postmaster.opts would not be needed as initdb should create these files), to another directory.
3 - Remove the entire /usr/local/pgsql/data directory.
4 - Run initdb -D "/usr/local/pgsql/data"
5 - Copy those files back to /usr/local/pgsql/data
6 - Start the server.
7 - Restore information generated by pg_dump (tables) and pg_dumpall -g (users and groups).
Would this work well? I know there are other procedures and I would be pleased to know that this procedure works fine. Thanks in advance, Enio
[ADMIN] copying databases w/ indexes
I have two questions about restoring and moving databases (in pg 7.3), to which I have not been able to find suitable answers in the documentation (and searching the list archives has been painfully slow and fruitless). 1) What is the fastest way to restore a database *with indexes*? We have a situation where I need to make a duplicate of a production database for development work, so that we can develop code without interfering with the production database. I use pg_dump for this, but I have not been able to get pg_restore to work (usually because of constraint problems). So I do the restores from a text dump piped into psql (with all the constraints applied last). Then the problem is that reindexing takes a very long time. One particular index is taking on the order of 12 hours to finish. Is there a way to dump and restore the index itself so that the tables do not need to be reindexed? The dumps and restores are between identical installations of postgres, sometimes even the same running instance of postgres. And a tangentially related question: 2) Is there a way to rename a database? That is the other thing that I occasionally need to do, and the only way I have found is dump/restore, and again we pay the penalty of the 12-hour reindexing. Any pointers? Thanks! yuji Yuji Shinozaki, Computer Systems Senior Engineer [EMAIL PROTECTED] Advanced Technologies Group (434)924-7171 Information Technology  Communication http://www.people.virginia.edu/~ys2n University of Virginia
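On question 2, for the archives: 7.3 has no supported rename command. A later release added one, and the commonly cited 7.3-era workaround is a direct catalog update (superuser only, with nobody connected to the database; treat it as at-your-own-risk, not an endorsed method):

```sql
-- PostgreSQL 7.4 and later:
ALTER DATABASE olddb RENAME TO newdb;

-- 7.3-era workaround (risky; superuser, no active connections to olddb):
UPDATE pg_database SET datname = 'newdb' WHERE datname = 'olddb';
```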
Re: [ADMIN] Backup routine
On Monday, August 11, 2003 17:26, [EMAIL PROTECTED] wrote: In article [EMAIL PROTECTED], Bruce Momjian [EMAIL PROTECTED] wrote: Also, I assume you have to stop the server just for a moment while you do the freeze, right? It depends on whether you need a known state or just a consistent state. Taking a snapshot of the system will get you a consistent state, just as if the machine had crashed. You can restore that snapshot, bring PG back up, and everything will work. Of course, you really have no way of knowing which transactions were committed and which were not. On the other hand, stop the server/snapshot/start the server gives you not only consistency, but a known state. That is, you know for sure that whatever was done before you stopped the server is what was done. But these considerations apply to pg_dumps as well, no? I guess with pg_dump you CAN dump one database at a time, and you can quiesce each database before dumping -- disallow connections to that database for the duration of the pg_dump, and wait for all transactions to complete before starting pg_dump -- which is a little more flexible. Given the time it takes to do a pg_dump on databases over a few gigabytes in size, though, I can't say I find the flexibility valuable. Cheers, Murthy
Re: [ADMIN] readline missing on linux?
Bruce Momjian [EMAIL PROTECTED] writes: Just a question --- if you link readline against -ltermcap when the library is created, you don't need to add -ltermcap when you link apps to readline, right? You somehow bind the dependency in to the shared library, right? It could be done that way, but apparently is not on many platforms. What configure is testing for is to see whether -lreadline links successfully without a following -ltermcap. It doesn't. regards, tom lane
Re: [ADMIN] Installation problem with libreadline
Hello, How do I go about resolving the undefined ncurses references in libreadline? I have Mandrake Linux. Did you install readline-devel? Daniel
Re: [ADMIN] locale problems
Hornyak Laszlo writes: I made a mistake: I changed the operating system under my database from Red Hat 8.0 to Debian 3.0 (Woody) without pg_dump-ing the database. The database version hasn't changed. Now I cannot start my database on the same data directory; it stops with the following failure: Failed to initialize lc_messages to '' FATAL: invalid value for option 'LC_MESSAGES': 'en_US.UTF-8' pg_ctl: cannot start postmaster Examine the log output. Install the en_US.UTF-8 locale on your system. Try dpkg-reconfigure locales. -- Peter Eisentraut [EMAIL PROTECTED]
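On Debian, that usually amounts to the following (the locale name you enable must match exactly what the error message shows):

```shell
locale -a | grep -i 'en_US'        # see which locales are already installed

dpkg-reconfigure locales           # enable en_US.UTF-8 interactively

# or non-interactively: add the locale to /etc/locale.gen and regenerate
echo 'en_US.UTF-8 UTF-8' >> /etc/locale.gen
locale-gen
```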
RE : [ADMIN] Oracle to PostgreSQL
Marco Roda [EMAIL PROTECTED] wrote .. Hi, I need to port data from an Oracle 8.0 base to PostgreSQL 7.2. I have scripts to create the base on Oracle and PostgreSQL (it is about 40 tables), but I need to port a great amount of data to PostgreSQL. Can anybody help me? I have never seen any documents about migrating from Oracle 8 to PostgreSQL, but I've already done this kind of migration; here's my warrior's way: - Make a SQL dump of the tables (structure + constraints + sequences + data), and try to load this dump file into a PostgreSQL database (via psql). You will probably need to do several search-and-replaces in the SQL code in order to make it PostgreSQL compliant. But as PostgreSQL tries to follow the SQL standard, it shouldn't be very difficult. - The most difficult step is to migrate the stored procedures. The biggest work is to turn PL/SQL packages into PL/pgSQL functions (unfortunately, PostgreSQL doesn't manage PL/pgSQL packages; that's still on the TODO list). So your package functions contacts.add() (for example) will have to be renamed to contacts_add(), and you will also have to patch all the clients that access this database. This migration may be quite long, so leave the Oracle database running. Keep a note of all the changes you've made to get your SQL dump valid for PostgreSQL, and make a script (in Perl, PHP, or shell, ...). When everything is OK, stop writes to the Oracle DB (allow only select queries), do the SQL dump (with the latest data), execute your script on the SQL dump, do the migration, and switch the users to the new database. (Be sure to have the new clients ready before doing the big jump.) The job of migrating from Oracle to PostgreSQL is a serious job (and sometimes difficult), but the migration wins are really interesting. I hope this will help and that you will understand my English. :-) Regards, -- --- Bruno BAGUETTE - [EMAIL PROTECTED]
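Bruno mentions scripting the fix-ups; a minimal sed-based sketch of such a script (the substitution list is purely illustrative, assumes GNU sed, and any real schema will need many more rules):

```shell
# Pipe an Oracle SQL dump through a few PostgreSQL rewrites.
oracle_to_pg() {
    sed -e 's/VARCHAR2/varchar/g' \
        -e 's/NUMBER(\([0-9][0-9]*\),\([0-9][0-9]*\))/numeric(\1,\2)/g' \
        -e 's/\bNUMBER\b/numeric/g' \
        -e 's/SYSDATE/CURRENT_TIMESTAMP/g' \
        -e 's/contacts\.add(/contacts_add(/g'   # package call -> renamed function
}

echo 'name VARCHAR2(40), amount NUMBER(10,2)' | oracle_to_pg
```

Keeping the script under version control alongside the log of manual changes, as Bruno suggests, means the final cut-over dump can be converted mechanically.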
Re: [ADMIN] syslog enabled causes random hangs?
A checkpoint would also have reason to wait for a page-level lock, if the stuck backend was holding one. I am wondering though if the stuck condition consistently happens while trying to fire a trigger? That would be very interesting ... not sure what it'd mean though ... Hmm. I'm really at a loss as to how I could test this theory. I don't have a truly reproducible test case for any of these syslog problems, just one set of code with really bad luck one afternoon. Race conditions (or things that smell a lot like them) stink. It looks to me like your plpython code is all dead in the water, seeing that your Python installation is refusing creation of rexec. (AFAIK the only workaround is to downgrade Python to a version that allows rexec.) If you're using it all over the place, how come you haven't noticed that?? I did notice, and it was an oversight. I had just rebuilt Python 2.2.3 (for unrelated reasons) and forgot to comment out the poison line in rexec.py where they raise the stop using RExec exception. It behaves properly once that line is commented out (properly being it works the same as with earlier versions of Python; it appears all that changed was the insertion of the exception at the beginning of the RExec constructor). I tried to get the deadlock behavior again after fixing rexec.py, but my luck wasn't bad enough for another three runs, so I posted the case I had traces for. An idea just popped into my head, though. Perhaps I can create procedures in plpgsql and plpython which do nothing but spew notices (which would in turn be sent to syslog), and run one or two copies to see if they'll die on their own given sufficient time. That seems worthwhile, especially if I can get a deadlock in plpgsql, since it will take the blame away from both triggers and plpython. Does this sound like a reasonable experiment? I may try setting this up on my home machine tonight to run for a few days.
[ADMIN] Postgresql slow on XEON 2.4ghz/1gb ram
Hello. I have this problem: I'm running PostgreSQL 7.3 on a Windows 2000 server with a dual P3 1GHz / 1GB RAM with good performance. For better performance I changed the server to a Xeon 2.4GHz / 1GB RAM, and to my surprise performance decreased 80%. Does anybody have a similar experience? Is there any special configuration for PostgreSQL running on a Xeon processor? Does anyone have any idea to help me? Excuse my bad English. Many thanks, Wilson icq 77032308 msn [EMAIL PROTECTED]
Re: [ADMIN] ext3 block size
Wilson A. Galafassi Jr. wrote: hello. my database size is 5GB. what is the recommended block size? thanks wilson Hmm, IMHO this depends more on your I/O system than on the DB size. It doesn't make sense to have a block size which your I/O system (HD, controller, ...) cannot fetch in one read. And it also depends on the type of processing you are doing. If you will have mostly big table scans, a big block size is good, whereas if you will be doing lots of index access, a smaller one can make more sense. (And also if you store lots of data in one row.) In general I guess 8K (the default) is OK, but 16K can make sense. Regards, Dani
Re: [ADMIN] Backup routine
On Friday 08 August 2003 18:59, Enio Schutt Junior wrote: Hi Here, where I work, the backups of the PostgreSQL databases are being done the following way: There is a daily copy of nearly all the HD (excluding /tmp, /proc, /dev and so on) on which the databases are, and besides this there is also one script which makes a pg_dump of each of the databases on the server. This daily copy of the HD is made with the postmaster active (without stopping the daemon), so the data from /usr/local/pgsql/data would not be 100% consistent, I guess. Supposing there was a failure and it was necessary to restore the whole thing, I think the recovery procedure would be the following: 1) Copy data from the backup HD to a new HD 2) Once this was done, delete the postmaster.pid file and start the postmaster service 3) Drop all databases and recreate them from those pg_dump files There are some questions I have about this backup routine: If I recover data from that inconsistent backup HD, I know that the binary files (psql, pg_dump and so on) will remain OK. The data may have some inconsistencies. Would these inconsistencies let the postmaster start and work properly (that is, even with the possible presence of inconsistent data)? Would it start and be able to work normally and keep the information about users and groups? I am talking about users and groups information because these are not dumped by pg_dump. I was thinking about using pg_dumpall -g to generate this information. I was also thinking about excluding /usr/local/pgsql/data from the backup routine, as the data is also in other files generated by pg_dump. The problem is that this directory has not only the database data but also some config files, like postgresql.conf. In most cases this will work, so long as you have not actually had a hard disk failure. When Postgres comes up after it has crashed, it uses its WAL logs to clean up the database. So if your machine crashes without hard disk failure, your database should come up fine.
If you have had a disk failure, the problem is that the beginning of the database may be backed up before the end, so you will need to use the dump; i.e., the WAL logs may not agree with the database, and Table A may not agree with Table B, etc. Hence the general advice is to bring it up from the dump. The database will either come up, so you can drop the databases, or fail badly, in which case you can always work out what's gone wrong and work around it. However, there is a third way, which should be safe, though some people may disagree with me! If you can freeze the disk while you take the backup, the backup can be used as if the computer had crashed with no hard disk failure at all. I.e., the WAL will be consistent; the database may take longer to come up, but once it is up it will be safe (like paragraph 1). Now, freezing a disk for backup is not that difficult. You should be doing it anyway for user file consistency. (You don't want the first 30 pages of your document to disagree with the end because someone was saving it during the backup!) Freezing a disk can be done with LVM, for example: http://www.tldp.org/HOWTO/LVM-HOWTO/ and is not difficult. RAID can also be used to avoid disk failure in the first place. Always keep a dump, however, as the dump will be your only way out if your basic sanity checks fail! Peter Childs
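With LVM, the freeze Peter describes comes down to a copy-on-write snapshot; roughly like this (volume and mount-point names are made up — see the HOWTO he links for the real procedure):

```shell
# Snapshot the logical volume holding $PGDATA
lvcreate --size 1G --snapshot --name pgsnap /dev/vg0/pgdata

# Mount the snapshot read-only and copy it off
mount -o ro /dev/vg0/pgsnap /mnt/pgsnap
tar czf /backup/pgdata-snapshot.tar.gz -C /mnt/pgsnap .

# Drop the snapshot when done
umount /mnt/pgsnap
lvremove -f /dev/vg0/pgsnap
```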
Re: [ADMIN] syslog enabled causes random hangs?
Arthur Ward [EMAIL PROTECTED] writes: I'm back with more on the funky glibc-syslog-Postgres deadlocking behavior: It looks to me like the guy doing VACUUM is simply waiting for the other guy to release a page-level lock. The other guy is running a deferred trigger and so I'd expect him to be holding one or two page-level locks, on the page or pages containing the tuple or tuples passed to the trigger. Nothing evidently wrong there. The real question is why does vsyslog() have anything to block on, when it's running in an unthreaded process? Seeing that you are using plpython, I wonder if Python is confusing matters somehow. regards, tom lane
Re: [ADMIN] syslog enabled causes random hangs?
Arthur Ward [EMAIL PROTECTED] writes: An idea just popped into my head, though. Perhaps I can create procedures in plpgsql and plpython which do nothing but spew notices (which would in turn be sent to syslog), and run one or two copies to see if they'll die on their own given sufficient time. That seems worthwhile, especially if I can get a deadlock in plpgsql, since it will take the blame away from both triggers and plpython. Does this sound like a reasonable experiment? Sure. Note it seems possible that the failure occurs only during a particular backend's first or second attempt to send something to syslog (since the first attempt would do openlog). So you should probably run a test that involves sending only a few messages per session, rather than vast numbers of messages from one session. regards, tom lane
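A minimal plpgsql notice generator for that experiment might look like this (untested sketch, 7.3-style quoted function body; the function name is made up):

```sql
CREATE FUNCTION spew_notices(integer) RETURNS integer AS '
DECLARE
    i integer;
BEGIN
    FOR i IN 1 .. $1 LOOP
        RAISE NOTICE ''syslog test message %'', i;
    END LOOP;
    RETURN $1;
END;
' LANGUAGE plpgsql;

-- Per Tom's point about openlog, keep it to a few messages per session:
SELECT spew_notices(3);
```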
[pgadmin-support] Testing gateway
In theory, the news2mail gateway is back in place ...
[ADMIN] I don't see database tables
I can connect to my Postgres server and select any database, but I cannot see my tables. What's wrong? RGDS
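A likely first check, from psql (assuming "my tables" means ordinary user tables, and you are connected to the database you created them in):

```sql
\c mydb          -- connect to the right database
\dt              -- list user tables

-- or query the catalog directly:
SELECT tablename FROM pg_tables
 WHERE tablename NOT LIKE 'pg\\_%';
```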
Re: [ADMIN] readline missing on linux
Uh, I am not sure. I would re-install readline and see if it picks up those libraries this time. What failures are you seeing, exactly? Oh, I remember, it is linking failures. You can fix it manually if you add -ltermcap to the link line, and you can even have configure add it using one of its flags. --- Randolph Jones wrote: From: Bruce Momjian It appears your readline needs -ltermcap, -lterminfo, or -lcurses to link properly. I think this happens when the readline library is not linked against those when it is created. --- Randolph Jones wrote: I am getting configure failures due to missing readline. I have redhat 9, 2.4.21, with readline and readline-devel 2.4.2 installed. I Thanks for your reply. I have termcap and ncurses installed. Do I need to install terminfo? How do I link them to readline? TIA rfjones -- Bruce Momjian | http://candle.pha.pa.us [EMAIL PROTECTED] | (610) 359-1001 + If your life is a hard drive, | 13 Roberts Road + Christ can be your backup. | Newtown Square, Pennsylvania 19073
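Concretely, the manual check and the configure-time fix Bruce describes look something like this (assuming your configure script honors a LIBS setting, which autoconf-generated scripts generally do):

```shell
# Does readline link on its own, or only with termcap behind it?
echo 'int main(void){ return 0; }' > conftest.c
gcc -o conftest conftest.c -lreadline            # fails on this setup
gcc -o conftest conftest.c -lreadline -ltermcap  # should succeed

# Tell configure to add the library on every link:
LIBS=-ltermcap ./configure
```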
Re: [ADMIN] syslog enabled causes random hangs?
It looks to me like the guy doing VACUUM is simply waiting for the other guy to release a page-level lock. The other guy is running a deferred trigger and so I'd expect him to be holding one or two page-level locks, on the page or pages containing the tuple or tuples passed to the trigger. Nothing evidently wrong there. If I remember what I was working on the other day when this whole thing started, I think it was a single backend and a checkpoint that collided. I'll trace that combination, assuming it happens again. The real question is why does vsyslog() have anything to block on, when it's running in an unthreaded process? Seeing that you are using plpython, I wonder if Python is confusing matters somehow. Oof. I'm using plpython all over the place; I don't think this problem has happened in any location that can work without it easily. :-/
[ADMIN] locale problems
Hi all! I made a mistake: I changed the operating system under my database from Red Hat 8.0 to Debian 3.0 (Woody) without pg_dump-ing the database. The database version hasn't changed. Now I cannot start my database on the same data directory; it stops with the following failure: Failed to initialize lc_messages to '' FATAL: invalid value for option 'LC_MESSAGES': 'en_US.UTF-8' pg_ctl: cannot start postmaster Examine the log output. I tried modifying postgresql.conf with a number of different locale settings, but they did not work. What is the shortest way to get my database back? Should I install Red Hat back? Thanks, Laszlo
Re: [ADMIN] readline missing on linux?
It appears your readline needs -ltermcap, -lterminfo, or -lcurses to link properly. I think this happens when the readline library is not linked against those when it is created. --- Randolph Jones wrote: I am getting configure failures due to missing readline. I have redhat 9, 2.4.21, with readline and readline-devel 2.4.2 installed. I am trying to build 7.3.3. Below is part of my config.log. Any help greatly appreciated TIA rfjones
configure:5786: result: no
configure:5800: checking for readline
configure:5837: gcc -o conftest -O2 conftest.c -lreadline -lcrypt -lresolv -lnsl -ldl -lm -lbsd >&5
/usr/lib/gcc-lib/i386-redhat-linux/3.2.2/../../../libreadline.so: undefined reference to `tgetnum'
/usr/lib/gcc-lib/i386-redhat-linux/3.2.2/../../../libreadline.so: undefined reference to `tgoto'
/usr/lib/gcc-lib/i386-redhat-linux/3.2.2/../../../libreadline.so: undefined reference to `tgetflag'
/usr/lib/gcc-lib/i386-redhat-linux/3.2.2/../../../libreadline.so: undefined reference to `BC'
/usr/lib/gcc-lib/i386-redhat-linux/3.2.2/../../../libreadline.so: undefined reference to `tputs'
/usr/lib/gcc-lib/i386-redhat-linux/3.2.2/../../../libreadline.so: undefined reference to `PC'
/usr/lib/gcc-lib/i386-redhat-linux/3.2.2/../../../libreadline.so: undefined reference to `tgetent'
/usr/lib/gcc-lib/i386-redhat-linux/3.2.2/../../../libreadline.so: undefined reference to `UP'
/usr/lib/gcc-lib/i386-redhat-linux/3.2.2/../../../libreadline.so: undefined reference to `tgetstr'
collect2: ld returned 1 exit status
configure:5840: $? = 1
configure: failed program was:
#line 5811 "configure"
#include "confdefs.h"
/* Override any gcc2 internal prototype to avoid an error. */
#ifdef __cplusplus
extern "C"
#endif
/* We use char because int might match the return type of a gcc2 builtin and then its argument prototype would still apply. */
char readline ();
#ifdef F77_DUMMY_MAIN
# ifdef __cplusplus
extern "C"
# endif
int F77_DUMMY_MAIN() { return 1; }
#endif
int main () { readline (); ; return 0; }
configure:5837: gcc -o conftest -O2 conftest.c -ledit -lcrypt -lresolv -lnsl -ldl -lm -lbsd >&5
/usr/bin/ld: cannot find -ledit
collect2: ld returned 1 exit status
configure:5840: $? = 1
configure: failed program was:
#line 5811 "configure"
#include "confdefs.h"
Re: [ADMIN] readline missing on linux?
In article [EMAIL PROTECTED], Tom Lane [EMAIL PROTECTED] wrote: Bruce Momjian [EMAIL PROTECTED] writes: Just a question --- if you link readline against -ltermcap when the library is created, you don't need to add -ltermcap when you link apps to readline, right? You somehow bind the dependency in to the shared library, right? It could be done that way, but apparently is not on many platforms. What configure is testing for is to see whether -lreadline links successfully without a following -ltermcap. It doesn't. You also then create a difference between static and dynamic linking which a lot of people, such as myself, hate. -lreadline for dynamic linking but -lreadline -ltermcap for static linking? No thanks; I'd rather have consistency. mrc -- Mike Castle [EMAIL PROTECTED] www.netcom.com/~dalgoda/ We are all of us living in the shadow of Manhattan. -- Watchmen fatal (You are in a maze of twisty compiler features, all different); -- gcc
Re: [ADMIN] readline missing on linux
From: Bruce Momjian It appears your readline needs -ltermcap, -lterminfo, or -lcurses to link properly. I think this happens when the readline library is not linked against those when it is created. --- Randolph Jones wrote: I am getting configure failures due to missing readline. I have redhat 9, 2.4.21, with readline and readline-devel 2.4.2 installed. I Thanks for your reply. I have termcap and ncurses installed. Do I need to install terminfo? How do I link them to readline? TIA rfjones
Re: [ADMIN] Cost estimates consistently too high - does it matter?
Matt Clark [EMAIL PROTECTED] writes: I've noticed that the cost estimates for a lot of my queries are consistently far too high. You seem to be under a misapprehension. The cost estimates are not in units of milliseconds; they are on an arbitrary scale with 1.0 defined as one disk fetch. LIMIT throws another monkey wrench into the mix: the estimates for the plan nodes underneath the limit are done as if the plan were to be executed to completion, which of course it won't be. regards, tom lane
Re: [ADMIN] Postgresql slow on XEON 2.4ghz/1gb ram
Deactivate the HyperThreading and it should run like a young cat *g* -- postgreSQL on Netware - the red elephant http://postgresql.dseichter.org "Wilson A. Galafassi Jr." [EMAIL PROTECTED] wrote in the news posting news:[EMAIL PROTECTED]... Hello. I have this problem: I'm running PostgreSQL 7.3 on a Windows 2000 server with a dual P3 1GHz / 1GB RAM with good performance. For better performance I changed the server to a Xeon 2.4GHz / 1GB RAM, and to my surprise performance decreased 80%. Does anybody have a similar experience? Is there any special configuration for PostgreSQL running on a Xeon processor? Does anyone have any idea to help me? Excuse my bad English. Many thanks, Wilson icq 77032308 msn [EMAIL PROTECTED]
Re: [ADMIN] syslog enabled causes random hangs?
I'm back with more on the funky glibc-syslog-Postgres deadlocking behavior: It's really too bad that your gdb backtrace didn't show anything past the write_syslog level (which is an elog subroutine). If we could see whether the elog had been issued from a signal handler, and if so what it had interrupted, we'd have an idea whether this is a known syslog deficiency or not. Do you want to rebuild with debug symbols and try the backtrace again? A minor ah-ha here: whoever wrote the original Gentoo build for Postgres left out --enable-debug, even though Gentoo has a system-wide flag for "no, I don't want symbols" (which I did not set). I was testing a different piece of my app at home, and triggered the deadlock by accident. Being frustrated, I took the time to rebuild PG with symbols, and also rebuilt Python (for other reasons), which accounts for the RExec exception in the traceback here. The exception normally comes back to the client with no problems. I had forgotten to comment out the poison line in Python 2.2.3's RExec module so my plpython code would continue working. This is PostgreSQL 7.3.4 running against glibc 2.3.2. It just occurred to me that I didn't trace the postmaster or the other always-on processes, just the two dead backends. In a couple of attempts to recreate the problem again, I had no failures. :-( The original problem was on a different machine running a different part of my project, and seemed more reproducible, or I could have just had a bad day, so I can go back to that at some point if tracing the other processes is a good idea. 
postgres 27259 3.2 6.4 47488 20600 pts/0 S 03:30 0:05 postgres: postgres Trucking [local] VACUUM
(gdb) bt
#0  0x4dbf81a7 in semop () from /lib/libc.so.6
#1  0x080f2dd7 in PGSemaphoreLock (sema=0x1, interruptOK=1 '\001') at pg_sema.c:434
#2  0x08106dd3 in ProcWaitForSignal () at proc.c:895
#3  0x08101878 in LockBufferForCleanup (buffer=1021) at bufmgr.c:1946
#4  0x080c380b in lazy_vacuum_heap (onerel=0x420a20f8, vacrelstats=0x8288f88) at vacuumlazy.c:474
#5  0x080c3476 in lazy_scan_heap (onerel=0x420a20f8, vacrelstats=0x8288f88, Irel=0x8297818, nindexes=3) at vacuumlazy.c:426
#6  0x080c31e2 in lazy_vacuum_rel (onerel=0x21, vacstmt=0x1) at vacuumlazy.c:158
#7  0x080c02ee in vacuum_rel (relid=1021, vacstmt=0x8287138, expected_relkind=114 'r') at vacuum.c:829
#8  0x080bfd32 in vacuum (vacstmt=0x8287138) at vacuum.c:290
#9  0x0810a3a1 in pg_exec_query_string (query_string=0x8287138, dest=Remote, parse_context=0x827bb30) at postgres.c:789
#10 0x0810afdf in PostgresMain (argc=4, argv=0xbfff7cc8, username=0x822e1d9 "postgres") at postgres.c:2013
#11 0x080f5175 in DoBackend (port=0x822e0a8) at postmaster.c:2310
#12 0x080f4d2f in BackendStartup (port=0x822e0a8) at postmaster.c:1932
#13 0x080f3f95 in ServerLoop () at postmaster.c:1009
#14 0x080f3989 in PostmasterMain (argc=1, argv=0x82162e0) at postmaster.c:788
#15 0x080d684c in main (argc=1, argv=0xbfff85c4) at main.c:210
#16 0x4db327a7 in __libc_start_main () from /lib/libc.so.6

postgres 27235 5.7 3.0 41900 9784 pts/0 S 03:29 0:15 postgres: award Trucking [local] INSERT
(gdb) bt
#0  0x4db45ed6 in sigsuspend () from /lib/libc.so.6
#1  0x4de40218 in __pthread_wait_for_restart_signal () from /lib/libpthread.so.0
#2  0x4de419a0 in __pthread_alt_lock () from /lib/libpthread.so.0
#3  0x4de3ec17 in pthread_mutex_lock () from /lib/libpthread.so.0
#4  0x4dbf334c in vsyslog () from /lib/libc.so.6
#5  0x4dbf2ebf in syslog () from /lib/libc.so.6
#6  0x0814a49b in write_syslog (level=4, line=0xbfff70fc "ERROR: plpython: Unable to create rexec.RExec instance\nexceptions.RuntimeError: This code is not secure in Python 2.2 and 2.3") at elog.c:728
#7  0x08149bf9 in elog (lev=-1073779172, fmt=0x421a44fa "plpython: %s\n%s") at elog.c:383
#8  0x4214b496 in PLy_elog (elevel=20, fmt=0x421a4a80 "Unable to create rexec.RExec instance") at plpython.c:2811
#9  0x421490ff in PLy_procedure_compile (proc=0x83abdb8, src=0x8359610 "\n# TD[\"args\"][0] - audit_table (with schema) \n# TD[\"relid\"] = OID of the table being triggered on\n\nif not SD.has_key(\"plans\"):\n\tSD[\"plans\"] = {}\n\nif SD[\"plans\"].has_key(TD[\"relid\"]):\n\tcachedPlans = SD...") at plpython.c:1203
#10 0x42148f32 in PLy_procedure_create (fcinfo=0xbfff78c0, is_trigger=1 '\001', procTup=0x420b1910, key=0xbfff7730 "5210458_trigger") at plpython.c:1177
#11 0x42148ba8 in PLy_procedure_get (fcinfo=0xbfff78c0, is_trigger=1 '\001') at plpython.c:1043
#12 0x4214758e in plpython_call_handler (fcinfo=0xbfff78c0) at plpython.c:423
#13 0x0814b4cc in fmgr_security_definer (fcinfo=0xbfff78c0) at fmgr.c:681
#14 0x080bbb8f in ExecCallTriggerFunc (trigdata=0xbfff79e8, finfo=0x82aff98, per_tuple_context=0x827c9d0) at trigger.c:1121
#15 0x080bc39a in DeferredTriggerExecute (event=0x82c2ba8, itemno=-4, rel=0x4f815b, trigdesc=0x828bff8, finfo=0x82afcf0, per_tuple_context=0x827c9d0) at trigger.c:1656
#16 0x080bc685 in deferredTriggerInvokeEvents (immediate_only=1
Re: [ADMIN] readline missing in linux
Thanks to Bruce Momjian and Tom Lane: I installed ncurses-devel and all is well. Thanks so much, rfjones
[ADMIN] dbmirror and failure/recovery modes
Dear peoples, I'm setting up a database in postgres which will have light traffic writing to a few tables (and somewhat heavier traffic reading them). We'd like to mirror that database using dbmirror to two others. Let's assume that we're humming along and the master goes down. We switch traffic to read from one of the two slaves. What happens if we feel a need to add information while the master is down? Reconfigure that former slave to be a master and then manually reconcile the former master when it is available, perhaps? I didn't see anything in the documentation or on the postgres site, so I apologize if I am asking a FAQ. Thanks, Greg Williamson DBA GlobeXplorer LLC
Re: [ADMIN] Backup routine
The world rejoiced as [EMAIL PROTECTED] (Peter and Sarah Childs) wrote: However, there is a third way, which should be safe, but some people may disagree with me! If you can freeze the disk while you take the backup, the backup can be used as if the computer had crashed with no hard disk failure at all. I.e. WAL will be consistent and the database may take longer to come up, but once it is up it will be safe (like paragraph 1). Now freezing a disk for backup is not that difficult. You should be doing it anyway for user file consistency. (You don't want the first 30 pages of your document to disagree with the end because someone was saving it during the backup!) I heard D'Arcy Cain indicate that some SAN systems (I think he mentioned NetApp) support this sort of thing, too. Digital's AdvFS also supports it. Of course, if you take this approach, you have to make _certain_ that when you freeze a replica of a filesystem, _ALL_ of the database is contained in that one filesystem. If you move WAL to a different filesystem, bets would be off again... -- let name=cbbrowne and tld=ntlug.org in String.concat "@" [name;tld];; http://www3.sympatico.ca/cbbrowne/ The real problem with the year 2000 is that there are too many zero bits and that adversely affects the global bit density. -- Boyd Roberts [EMAIL PROTECTED]
Re: [ADMIN] Oracle to PostgreSQL
On Wed, 6 Aug 2003, Marco Roda wrote: Hi, I need to port data from an Oracle 8.0 base to PostgreSQL 7.2. I have scripts to create the base on Oracle and PostgreSQL (it is about 40 tables), but I need to port a great amount of data to PostgreSQL. Can anybody help me? There's a script in the contrib directory called something like ora2pg that's been helpful to some folks. EMS makes a migration program http://www.ems-hitech.com/pgsqlutils that might help too. Haven't used it, but it's only $39 and I think they have trial versions available.
[ADMIN] Two-phase-commit
Hi! I have a customer asking me: is it possible to build one app with two-phase-commit transactions using PostgreSQL + Informix? Well, I searched the web and the PostgreSQL web site, but I found nothing about it. I hope this is the correct list to post this question to, and, of course, that someone can help me ;). thks
Re: [ADMIN] pg_autovacuum ?
On Mon, 2003-08-04 at 10:15, Mendola Gaetano wrote: Hi, I used to install postgres 7.3.3 from RPM; today I needed to take a look at pg_autovacuum but I didn't find it. Should I download it somewhere else? You'd have to check it out from CVS and then compile it in a 7.3 source tree. It's on the 7.4 branch, not on the 7.3 branch, but the authors mentioned that at least initially it would run against 7.3. Robert Treat -- Build A Brighter Lamp :: Linux Apache {middleware} PostgreSQL
Re: [ADMIN] Backup routine
There's an extensive discussion of doing file system level backups off an LVM snapshot taken on a frozen XFS filesystem (XFS provides the xfs_freeze command, which allows the sysadmin to freeze and unfreeze a given file system at will). See http://marc.theaimsgroup.com/?l=postgresql-admin&w=2&r=1&s=LVM+snapshots&q=b (as Mike Castle points out in this thread, this gives you a consistent state but not a known one). If your hardware gives you the ability to freeze I/O, you might get similar capabilities with filesystems/OSes other than XFS on Linux. In that thread, I also discussed a strategy for getting consistent backups when the $PGDATA/pg_xlog/ folder is on a different filesystem:

  freeze $PGDATA/pg_xlog/
  freeze $PGDATA
  create a snapshot of $PGDATA/pg_xlog/ and mount it at $PGDATA.snap/pg_xlog
  create a snapshot of $PGDATA and mount it at $PGDATA.snap
  unfreeze $PGDATA
  unfreeze $PGDATA/pg_xlog/
  rsync $PGDATA.snap/ to a backup volume
  umount and lvremove the snapshot of $PGDATA/pg_xlog/
  umount and lvremove the snapshot of $PGDATA/

(Notice that since you only need to freeze the filesystems during snapshot creation, which is fast, you don't hold the postmaster up very long.) Filesystem performance is adversely affected as long as the snapshots exist, so I do a preliminary rsync (giving an inconsistent backup) at the beginning of the sequence to minimize the amount of data to be backed up from the snapshot(s). An example bash script which handles $PGDATA/pg_xlog/ being on a different filesystem (and mounted, not symlinked, at $PGDATA/pg_xlog) is attached. It has been tested under a fairly heavy load and seems to handle the job. I'd be interested in comments on the strategy/implementation from the postgresql gurus on this list (the script is rather messy, sorry). 
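Since the attached script survives only as binary data in the archive, here is a minimal dry-run sketch of the same ordering (paths, volume names, and snapshot sizes are assumptions; the commands are printed rather than executed, so nothing is touched — running them for real needs root, LVM2, and XFS):

```python
# Dry-run sketch of the snapshot sequence above (names are assumptions).
# Swap subprocess.run(cmd, check=True) in for print to execute for real.
PGDATA = "/var/lib/pgsql/data"
VG = "/dev/vg0"

plan = [
    ["xfs_freeze", "-f", f"{PGDATA}/pg_xlog"],                        # freeze xlog first
    ["xfs_freeze", "-f", PGDATA],                                     # then the data area
    ["lvcreate", "-s", "-n", "pg_xlog_snap", "-L", "1G", f"{VG}/pg_xlog"],  # snapshot xlog
    ["lvcreate", "-s", "-n", "pg_data_snap", "-L", "4G", f"{VG}/pg_data"],  # snapshot data
    ["xfs_freeze", "-u", PGDATA],                                     # unfreeze data
    ["xfs_freeze", "-u", f"{PGDATA}/pg_xlog"],                        # unfreeze xlog last
    ["rsync", "-a", f"{PGDATA}.snap/", "/backup/pgdata/"],            # copy from snapshots
    ["lvremove", "-f", f"{VG}/pg_xlog_snap", f"{VG}/pg_data_snap"],   # drop snapshots
]
for cmd in plan:
    print(" ".join(cmd))
```

The point of the ordering is that the freeze window covers only snapshot creation, so the postmaster is stalled for seconds, not for the duration of the copy.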
Thanks, Murthy

-----Original Message-----
From: Bruce Momjian [mailto:[EMAIL PROTECTED]]
Sent: Sunday, August 10, 2003 23:13
To: Christopher Browne
Cc: [EMAIL PROTECTED]
Subject: Re: [ADMIN] Backup routine

Christopher Browne wrote: The world rejoiced as [EMAIL PROTECTED] (Peter and Sarah Childs) wrote: However, there is a third way, which should be safe, but some people may disagree with me! If you can freeze the disk while you take the backup, the backup can be used as if the computer had crashed with no hard disk failure at all. I.e. WAL will be consistent and the database may take longer to come up, but once it is up it will be safe (like paragraph 1). Now freezing a disk for backup is not that difficult. You should be doing it anyway for user file consistency. (You don't want the first 30 pages of your document to disagree with the end because someone was saving it during the backup!) I heard D'Arcy Cain indicate that some SAN systems (I think he mentioned NetApp) support this sort of thing, too. Digital's AdvFS also supports it. Of course, if you take this approach, you have to make _certain_ that when you freeze a replica of a filesystem, _ALL_ of the database is contained in that one filesystem. If you move WAL to a different filesystem, bets would be off again... Also, I assume you have to stop the server just for a moment while you do the freeze, right? -- Bruce Momjian | http://candle.pha.pa.us [EMAIL PROTECTED] | (610) 359-1001 + If your life is a hard drive, | 13 Roberts Road + Christ can be your backup. | Newtown Square, Pennsylvania 19073

[Attachment: pgSnapBack2.sh.discuss]
Re: [ADMIN] counting rows
nice stuff (the whole presentation, full of stuff i did not know), thanks. of course, a question... the bit that reads: If you know the maximum value of an integer column and have an index on it: SELECT COUNT(*) FROM t2 WHERE c2 < 100; made me think that the above would do an index scan vice a table scan. but when i try it with my table (currently has almost 70,000,000 rows) it still does a table scan (at least, EXPLAIN says it will do a table scan) ... well, sometimes. the table has two integer fields, SECONDS and MICROS, and i have created an index on (SECONDS, MICROS). ANALYZEd the table so the index would be considered useful, tested that pg was happy by running: EXPLAIN SELECT count(*) FROM tess WHERE SECONDS < 100; sure enough, query plan says it'll use an index scan. but... EXPLAIN SELECT count(*) from tess where SECONDS < 11; query plan here says it will use a table scan. the cutoff, the highest value at which it will still do an index scan, is 1060183451. given that this value is typically gotten from the system call gettimeofday and the current maximum value is 1060188816 (about noon today), this is, er, problematic. any ideas? thanks. pg On Wed, 6 Aug 2003 [EMAIL PROTECTED] wrote: i need to get a row count on a large table. it appears that select count(*) from table; always does a table scan, whether i have an index or not, and that is too slow. http://www.gtsm.com/oscon2003/findrows.html -- Greg Sabino Mullane [EMAIL PROTECTED] PGP Key: 0x14964AC8 200308060951
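What the poster is bumping into is the planner's crossover between scan types: an index scan is costed at roughly one random page fetch per matching row, while a sequential scan costs the same no matter how many rows match, so past some selectivity the seq scan estimate wins. A toy model of that crossover (constants are illustrative, not PostgreSQL's actual arithmetic):

```python
# Toy model of the index-vs-seq-scan crossover (illustrative constants only).
def seq_cost(pages, rows, cpu_tuple_cost=0.01):
    # sequential scan: read every page once, examine every tuple
    return pages + rows * cpu_tuple_cost

def index_cost(matching_rows, random_page_cost=4.0):
    # crude: roughly one random heap fetch per matching row
    return matching_rows * random_page_cost

PAGES, ROWS = 500_000, 70_000_000          # a table about the poster's size
for fraction in (0.0001, 0.001, 0.01, 0.1):
    idx, seq = index_cost(ROWS * fraction), seq_cost(PAGES, ROWS)
    print(f"selectivity {fraction}: {'index scan' if idx < seq else 'seq scan'}")
```

The "cutoff" value the poster found is just the constant at which the estimated matching fraction crosses that break-even point; it is not special to gettimeofday values.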
Re: [ADMIN] Installation problem with libreadline
On Sat, 2003-08-09 at 08:17, Daniel Seichter wrote: Hello, How do I go about resolving the undefined ncurses references in libreadline? Yeah, that's the thing that fails when I run ./configure of the postgres 7.3.3 build under Mandrake 7.3. ld checks libreadline.so for various calls and then bails. libreadline is a symlink to the latest version of libreadline, which appears to have dropped ncurses from the build for some reason (proprietary code in ncurses...???). I have Mandrake Linux. Did you install readline-devel? I guess I could do that. I'll try it today and let the list know how it goes. Thanks for your help, Peter
[ADMIN] Installation problem with libreadline
Hello, I have seen a number of links via google describing build problems (./configure) with postgres and libreadline.so. How do I go about resolving the undefined ncurses references in libreadline? I have Mandrake Linux. Thanks for the help, Peter
RE : [ADMIN] Oracle to PostgreSQL
hi, At 10:25 07.08.2003 +0200, Bruno BAGUETTE wrote: Marco Roda [EMAIL PROTECTED] wrote: Hi, I need to port data from an Oracle 8.0 base to PostgreSQL 7.2. I have scripts to create the base on Oracle and PostgreSQL (it is about 40 tables), but I need to port a great amount of data to PostgreSQL. Can anybody help me? I have never seen any documents about migrating from Oracle 8 to PostgreSQL, but I've already done this kind of migration; here's my warrior's way: make a SQL dump of the tables (structure + constraints + sequences + data), and try to load this dump file into a PostgreSQL database (via psql). You will probably need to do several search-and-replaces in the SQL code in order to make it PostgreSQL compliant. But as PostgreSQL tries to follow the SQL standard, it shouldn't be very difficult. I have good experiences with: http://www.samse.fr/GPL/ora2pg/ora2pg.html Rafal
Re: [ADMIN] copying databases w/ indexes
On Thu, 14 Aug 2003, Yuji Shinozaki wrote: I have two questions about restoring and moving databases (in pg 7.3), to which I have not been able to find suitable answers in the documentation (and searching the list archives has been painfully slow and fruitless). 1) What is the fastest way to restore a database *with indexes*? We have a situation where I need to make a duplicate of a production database for development work, so that we can develop code without interfering with the production database. I use pg_dump for this, but I have not been able to get pg_restore to work (usually because of constraint problems). So I do the restores from a text dump piped into psql (with all the constraints applied last). Then the problem is that reindexing takes a very long time. One particular index is taking on the order of 12 hours to finish. I've been dumping and restoring a very, very big database, with indexes, with no such problems. The table details that get dumped set up the indexes, and AIUI the indexes are kept up to date as items are added. -- Sam Barnett-Cormack Software Developer | Student of Physics & Maths UK Mirror Service (http://www.mirror.ac.uk) | Lancaster University
Re: [ADMIN] copying databases w/ indexes
Aha! Thanks for the reply, because it got me thinking, and I reexamined the schema and discovered that the problem was that we had a foreign key constraint involving two columns of different type (varchar vs. integer!). [Yeesh.] I think the type conversion was what was making reindexing (not to mention inserts) slow! Now the restore only takes about 20 minutes (yay!) Thanks, yuji On Thu, 14 Aug 2003, Sam Barnett-Cormack wrote: On Thu, 14 Aug 2003, Yuji Shinozaki wrote: I have two questions about restoring and moving databases (in pg 7.3), to which I have not been able to find a suitable answers in the documentation (and searching the list archives has been painfully slow and fruitless). 1) What is the fastest way to restore a database *with indexes*? We have situation where I need to make a duplicate of a production database for development work, so that we can develop code without interfering with the production database. I use pg_dump for this, but I have not been able to get pg_restore to work (usually because of constraint problems). So I do the restores from a text dump piped into psql (with all the constraints applied last). Then the problem is that reindexing takes a very long time. One particular index is taking on the order of 12 hours to finish. I've been dumping and restoring a very very big database, with indexes, with no such problems. The table details that get dumped setup the indexes, and AIUI the indexes are kept up-to-date as items are added. 
-- Sam Barnett-Cormack Software Developer | Student of Physics & Maths UK Mirror Service (http://www.mirror.ac.uk) | Lancaster University

Yuji Shinozaki, Computer Systems Senior Engineer, [EMAIL PROTECTED] Advanced Technologies Group, (434)924-7171, Information Technology & Communication, http://www.people.virginia.edu/~ys2n University of Virginia
Re: [ADMIN] array[10]
Marco Kaerger wrote: hi there, i have got a problem concerning arrays in postgres. i created a function in which i want to know if index 8 of var_array is not NULL: IF var_array[8] IS NOT NULL THEN (query) END IF; then i call the function and the query is submitted, although var_array[8] is 'NULL'. From your question it appears that you think 'NULL' (with single quotes) and NULL (without single quotes) are the same thing -- they are not. The string literal 'NULL' is nothing more than a four-letter string. NULL, on the other hand, is a special value meaning that the data is unknown. Currently Postgres does not even support NULL elements in arrays, although if you test for a non-existing subscript the result is NULL. Why don't you give us a complete (i.e. the entire function) example of what you are trying to do, along with the specific query you believe gives the wrong result, and then maybe we can be of more help. Joe
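Joe's distinction can be shown outside SQL too; here is a loose Python analogy, with None standing in for SQL NULL (an approximation only, since SQL uses three-valued logic rather than Python's two-valued comparisons):

```python
# The four-character string 'NULL' versus the absence of a value.
element = "NULL"               # just a string literal
assert element == "NULL"       # it equals the string 'NULL'...
assert element is not None     # ...but it is not "no value at all"

# PostgreSQL of this era: subscripting a non-existent array element yields
# NULL; dict.get() returning None is the closest Python analogue.
arr = {1: "a", 2: "b"}         # stand-in for a 1-based, 2-element array
assert arr.get(8) is None      # like "var_array[8] IS NULL" being true
```

So a function that stores the string 'NULL' into element 8 will sail right past an IS NOT NULL test.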
[ADMIN] Cost estimates consistently too high - does it matter?
Hi, I've noticed that the cost estimates for a lot of my queries are consistently far too high. Sometimes it's because the row estimates are wrong, like this:

explain analyze select logtime from loginlog where uid='Ymogen::YM_User::3e2c0869c2fdd26d8a74d218d5a6ff585d490560' and result = 'Success' order by logtime desc limit 3;
NOTICE: QUERY PLAN:
Limit (cost=0.00..221.85 rows=3 width=8) (actual time=0.21..2.39 rows=3 loops=1)
  ->  Index Scan Backward using loginlog_logtime_idx on loginlog (cost=0.00..12846.69 rows=174 width=8) (actual time=0.20..2.37 rows=4 loops=1)
Total runtime: 2.48 msec

The row estimate here is off by a factor of 50, but the cost estimate is off by a factor of 5000. Sometimes the row estimates are good, but the costs are still too high:

explain analyze select u.email from ym_user u join mobilepm m on (m.ownerid = u._id) where m.status = 'Validated' and m.network = 'TMOBILEUK';
NOTICE: QUERY PLAN:
Nested Loop (cost=0.00..2569.13 rows=441 width=145) (actual time=1.93..248.57 rows=553 loops=1)
  ->  Seq Scan on mobilepm m (cost=0.00..795.11 rows=441 width=58) (actual time=1.69..132.83 rows=553 loops=1)
  ->  Index Scan using ym_user_id_idx on ym_user u (cost=0.00..4.01 rows=1 width=87) (actual time=0.19..0.20 rows=1 loops=553)
Total runtime: 249.47 msec

loginlog has 18 rows, mobilepm has 12000, ym_user has 5, and they've all been analyzed prior to running the query. The server is a Quad PIII 700 Xeon/1MB cache, 3GB RAM, hardware RAID10 on two SCSI channels with 128MB write-back cache. I've lowered the random_page_cost to 2 to reflect the decent disk IO, but I suppose the fact that the DB indexes are essentially all cached in RAM might also be affecting the results, although effective_cache_size is set to a realistic 262144 (2GB). 
Those planner params in full:

#effective_cache_size = 1000    # default in 8k pages
#random_page_cost = 4
#cpu_tuple_cost = 0.01
#cpu_index_tuple_cost = 0.001
#cpu_operator_cost = 0.0025
effective_cache_size = 262144   # 2GB of FS cache
random_page_cost = 2

For now the planner seems to be making the right choices, but my concern is that at some point the planner might start making some bad decisions, especially on more complex queries. Should I bother tweaking the planner costs more, and if so which ones? Am I fretting over nothing? Cheers Matt

Matt Clark, Ymogen Ltd, [EMAIL PROTECTED], corp.ymogen.net
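As a sanity check on that "2GB" comment, the unit arithmetic (effective_cache_size is counted in 8 kB buffer pages in this era of PostgreSQL):

```python
# effective_cache_size is expressed in 8 kB buffer pages.
PAGE = 8 * 1024                        # bytes per page
pages = 262144                         # the value from the config above
assert pages * PAGE == 2 * 1024**3     # exactly 2 GB, as the comment claims
print(pages * PAGE // 1024**3, "GB")   # -> 2 GB
```

Getting this conversion wrong by a factor of 8 (confusing pages with kilobytes) is a common way to mis-set this parameter.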
Re: [ADMIN] copying databases w/ indexes
See my other reply, but the problem was that I had a foreign key constraint between an integer and a varchar. (The column was a varchar and the foreign key was an integer field.) Seems to me postgres shouldn't have allowed me to do this at all, but I guess it did a slow automatic type conversion. So, reindexing was deathly slow. It also explains the fact that inserts into this table were getting increasingly slow as the table grew. I manually changed the type in the dump and restoring takes about 20 minutes instead of 12 hours! Phew! yuji On Thu, 14 Aug 2003, Tom Lane wrote: Yuji Shinozaki [EMAIL PROTECTED] writes: Then the problem is that reindexing takes a very long time. One particular index is taking on the order of 12 hours to finish. Seems like a lot. What is the index definition exactly, and how many rows in the table? If it's a btree, have you tried increasing sort_mem in the restore session? regards, tom lane Yuji Shinozaki, Computer Systems Senior Engineer, [EMAIL PROTECTED] Advanced Technologies Group, (434)924-7171, Information Technology & Communication, http://www.people.virginia.edu/~ys2n University of Virginia
Re: [ADMIN] Two-phase-commit
[EMAIL PROTECTED] (Bruce Momjian) writes: No, I don't think that is possible. We don't even have 2-phase commit between PostgreSQL backends. --- Wagner Sales wrote: Hi! I have a customer asking me: is it possible to build one app with two-phase-commit transactions using PostgreSQL + Informix? Well, I searched the web and the PostgreSQL web site, but I found nothing about it. I hope this is the correct list to post this question to, and, of course, that someone can help me ;). It's surely possible, but as Postgres isn't a transactional monitor, it could not handle 2-phase commit alone. With a TM like Tuxedo, you (well, he) surely can do this. [...] -- Aurélien DEHAY http://logicielslibres.info
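For readers unfamiliar with the term, here is a toy sketch of the protocol a transaction monitor runs on behalf of the databases (illustrative Python only, not Tuxedo or PostgreSQL code): each participant votes in a prepare phase, and the coordinator commits everywhere only if every vote is yes.

```python
# Toy two-phase-commit coordinator (illustrative only).
class Participant:
    def __init__(self, name, can_commit=True):
        self.name, self.can_commit = name, can_commit
        self.state = "idle"

    def prepare(self):                 # phase 1: vote yes/no
        self.state = "prepared" if self.can_commit else "aborted"
        return self.can_commit

    def finish(self, commit):          # phase 2: obey the coordinator
        self.state = "committed" if commit else "aborted"

def two_phase_commit(participants):
    votes = [p.prepare() for p in participants]   # phase 1: collect votes
    decision = all(votes)                         # commit only on unanimity
    for p in participants:                        # phase 2: apply decision
        p.finish(decision)
    return decision

dbs = [Participant("postgresql"), Participant("informix")]
print(two_phase_commit(dbs))   # True: both voted yes, both committed
```

The catch, and the reason Bruce's answer matters, is that phase 1 requires each database to hold a transaction in a "prepared" limbo state, which the PostgreSQL of this era cannot do.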
Re: [ADMIN] Backup routine
Bruce Momjian [EMAIL PROTECTED] writes: Tom Lane wrote: Yeah. Instead, exclude the directories below it ($PGDATA/base, etc). Yes, but if he restores everything but /base, he gets xlog and clog, and other stuff he shouldn't be restoring. That's why I said etc. Only the loose files in the top-level directory are interesting. regards, tom lane
Re: [ADMIN] Transactions, tuples, and VACUUM
Matt Clark [EMAIL PROTECTED] writes: consider: begin; update a set col1 = 'p' where id = '1'; update a set col2 = 'q' where id = '1'; commit; versus: update a set col1 = 'p', col2 = 'q' where id = '1'; Does the first case generate any more dead tuples that will need vacuuming than the second case, Yes. Takes twice as long, too. regards, tom lane
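Tom's "yes" follows from MVCC: an UPDATE never overwrites a row in place; it writes a new tuple version and leaves the old one dead for VACUUM, even within a single transaction. A toy model of that bookkeeping (an illustration only, nothing like the real heap code):

```python
# Toy MVCC model: each UPDATE creates a new row version; the superseded
# version becomes a dead tuple for VACUUM to reclaim.  (Illustrative only.)
class Table:
    def __init__(self, row):
        self.live = dict(row)
        self.dead_tuples = 0

    def update(self, **cols):
        self.dead_tuples += 1              # the old version is now dead
        self.live = {**self.live, **cols}  # the new version replaces it

# Two single-column updates, even inside one transaction: two dead tuples.
a = Table({"col1": "x", "col2": "y"})
a.update(col1="p")
a.update(col2="q")
print(a.dead_tuples)   # 2

# One combined update: one dead tuple.
b = Table({"col1": "x", "col2": "y"})
b.update(col1="p", col2="q")
print(b.dead_tuples)   # 1
```

There is no merging at commit time: the second UPDATE in the first case must already see (and supersede) the version the first UPDATE wrote.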
[ADMIN] Database in postgresql
hello sir, I am a novice programmer using Visual Basic with PostgreSQL as the database. Recently I have a problem; please help me with the following: is it possible to use two PostgreSQL database files in Linux across multiple hard disks? If so, please help me do it.
[ADMIN] Problems after upgrade postgresql 7.2.1 - 7.3.4
Hi, I upgraded postgresql 7.2.1 to 7.3.4. The new postgresql works fine, but after the upgrade, Zope can't connect to postgresql. Zope uses the "Z PoPy DataBase Adaptor". Zope shows the error: "Invalid connection string". Maybe somebody knows what the problem is? Thanks [AU]
Re: [ADMIN] Two-phase-commit
No, I don't think that is possible. We don't even have 2-phase commit between PostgreSQL backends. --- Wagner Sales wrote: Hi! I have a customer asking me: is it possible to build one app with two-phase-commit transactions using PostgreSQL + Informix? Well, I searched the web and the PostgreSQL web site, but I found nothing about it. I hope this is the correct list to post this question to, and, of course, that someone can help me ;). thks -- Bruce Momjian | http://candle.pha.pa.us [EMAIL PROTECTED] | (610) 359-1001 + If your life is a hard drive, | 13 Roberts Road + Christ can be your backup. | Newtown Square, Pennsylvania 19073
[ADMIN] Transactions, tuples, and VACUUM
Morning all, bit of a general question here... consider: begin; update a set col1 = 'p' where id = '1'; update a set col2 = 'q' where id = '1'; commit; versus: update a set col1 = 'p', col2 = 'q' where id = '1'; Does the first case generate any more dead tuples that will need vacuuming than the second case, or are the two updates 'merged' when the transaction is committed? Or is the answer 'sometimes' (I would guess depending on checkpoints)? Cheers Matt
[ADMIN] help me- postgres error - error while loading shared libraries - libpq.so.3
sir, I have installed postgresql 7.1.3 on the Red Hat 8 OS. I can start and stop the postmaster, but I cannot create a DB using the createdb command; it shows the error "error while loading shared libraries: libpq.so.3". Please help me. Thank you, Johnson
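This error usually means the runtime loader cannot find libpq, not that the library is missing. A hedged sketch of the standard diagnosis and the per-session workaround (the install prefix is an assumption; the permanent fix, as root, is to append the library directory to /etc/ld.so.conf and run ldconfig):

```python
# Hedged sketch: locate libpq and put its directory on the loader path.
import glob
import os

PGLIB = "/usr/local/pgsql/lib"   # assumed install prefix; adjust to yours

# 1. Does the library exist at all?
found = glob.glob(f"{PGLIB}/libpq.so*")
print(found or f"libpq not found under {PGLIB}")

# 2. Per-session fix, so a subsequently spawned createdb can find libpq.so.3:
os.environ["LD_LIBRARY_PATH"] = PGLIB + ":" + os.environ.get("LD_LIBRARY_PATH", "")

# 3. Permanent fix (as root): echo the directory into /etc/ld.so.conf, ldconfig.
```

Note that LD_LIBRARY_PATH must be set in the environment of the process that runs createdb; exporting it in the shell profile of the postgres user is the usual arrangement.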
[ADMIN] Hi Sir
hi Sir, There is some problem facing me with a program, especially in a Java Applet. So, can you help me with this problem? Please, if you can help me with any solution or a web site that would help me solve it, send it to my e-mail [EMAIL PROTECTED]
Re: [ADMIN] Script to recreate indexes?
Bruce Momjian wrote: Have you looked in the src/contrib/reindex directory? Ahem, nope I hadn't, until you told me so! It's exactly what I needed. Cheers, Dani
Re: [ADMIN] Backup routine
In article [EMAIL PROTECTED], Bruce Momjian [EMAIL PROTECTED] wrote: Also, I assume you have to stop the server just for a moment while you do the freeze, right? It depends on whether you need a known state or just a consistent state. Taking a snapshot of the system will get you a consistent state, just as if the machine had crashed. You can restore that snapshot, bring PG back up, and everything will work. Of course, you really have no way of knowing which transactions were committed and which were not. On the other hand, stop the server/snapshot/start the server gives you not only consistency, but a known state. That is, you know for sure that whatever was done before you stopped the server is what was done. Which approach is necessary for you depends on your needs, as always. mrc -- Mike Castle [EMAIL PROTECTED] www.netcom.com/~dalgoda/ We are all of us living in the shadow of Manhattan. -- Watchmen fatal (You are in a maze of twisty compiler features, all different); -- gcc
[ADMIN] array[10]
hi there, i have got a problem concerning arrays in postgres. i created a function in which i want to know if index 8 of var_array is not NULL: IF var_array[8] IS NOT NULL THEN (query) END IF; then i call the function and the query is submitted, although var_array[8] is 'NULL'. any solutions for that problem? thanx!
Re: [ADMIN] Backup routine
Centuries ago, Nostradamus foresaw when [EMAIL PROTECTED] (Bruce Momjian) would write: Christopher Browne wrote: The world rejoiced as [EMAIL PROTECTED] (Peter and Sarah Childs) wrote: However there is a third way. That should be safe, but some people may disagree with me! If you can freeze the disk while you take the backup, the backup can be used as if the computer had crashed with no hard disk failure at all. I.e., WAL will be consistent and the database may take longer to come up, but once it is up it will be safe (like paragraph 1). Now, freezing a disk for backup is not that difficult. You should be doing it anyway for user file consistency. (You don't want the first 30 pages of your document to disagree with the end because someone was saving it during the backup!) I heard D'Arcy Cain indicate that some SAN systems (I think he mentioned NetApp) support this sort of thing, too. Digital's AdvFS also supports it. Of course, if you take this approach, you have to make _certain_ that when you freeze a replica of a filesystem, _ALL_ of the database is contained in that one filesystem. If you move WAL to a different filesystem, bets would be off again... Also, I assume you have to stop the server just for a moment while you do the freeze, right? I'm sure that's _preferable_. Supposing you don't, the result is that the backup will be treated much like the condition where a server is terminated by power failure, and, at restart, the system will have to rummage around the WAL to clean up a bit. Obviously not what you'd want in an _ideal_ world, but it fits into what WAL is supposed to be able to protect against, right? If and when PITR gets into place, I'd think one valued feature would be the notion of being able to signal the postmaster to tell it to close off a WAL file and open a new one (even though we might not strictly be due for it). (Note that the O-guys can do something like this with their archive logs.)
If that signal can be submitted, then someone might be able to take this sort of cloned filesystem backup, and just drop off the last WAL file as irrelevant. That might not be quite exactly what's imminent for 7.5, mind you... -- select 'cbbrowne' || '@' || 'acm.org'; http://www.ntlug.org/~cbbrowne/sap.html
[ADMIN] Import data from postgres 7.0 to 7.3.4
Hello, I used PostgreSQL 7.0. I installed a new installation, 7.3.4. Could you tell me how I can import my database from 7.0 to 7.3.4? On the 7.0 database I used pg_dumpall -o base.sql and tried to import into 7.3.4 (psql -U postgres -f base.sql template1). Of course I got a lot of errors. In the Release Notes of the Administrator's Guide there is: .1.2. Migration to version 7.3 A dump/restore using pg_dump is required for those wishing to migrate data from any previous release. If your application examines the system catalogs, additional changes will be required due to the introduction of schemas in 7.3; for more information, see: http://www.ca.postgresql.org/docs/momjian/upgrade_tips_7.3. This link doesn't work. Thanks Luke
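A sketch of the usual cross-version dump/reload cycle (the file names and the error-log path are assumptions; this assumes the old 7.0 server is still running and the new 7.3.4 cluster has been freshly initdb'd):

```shell
# Dump everything from the running 7.0 server; -o preserves OIDs.
pg_dumpall -o > base.sql

# Restore into the new 7.3.4 cluster, capturing errors for review
# instead of letting them scroll by.
psql -U postgres -f base.sql template1 2> restore_errors.log

# Many reported errors are harmless (e.g. objects that already exist
# in template1); schema-related failures may need hand-editing of
# base.sql, given the schema changes introduced in 7.3.
less restore_errors.log
```

The key design point, per the quoted release notes, is that the dump/restore path is the only supported migration route across major versions; copying data files between 7.0 and 7.3 will not work.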
Re: [ADMIN] Backup routine
Oh, good point. By grabbing xlog along with the data files, you do get a consistent snapshot just like if it had crashed. -- Bruce Momjian | http://candle.pha.pa.us [EMAIL PROTECTED] | (610) 359-1001 + If your life is a hard drive, | 13 Roberts Road + Christ can be your backup. | Newtown Square, Pennsylvania 19073
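The freeze/copy approach discussed in this thread can be sketched with LVM (the volume, mount-point, and output names are assumptions). The point the thread stresses is that $PGDATA and pg_xlog must live on the same snapshotted volume, or consistency is lost:

```shell
# Take an atomic snapshot of the volume holding the WHOLE cluster
# (data files and xlog together).
lvcreate --snapshot --size 1G --name pgsnap /dev/vg0/pgdata

# Copy the frozen image at leisure; the live server keeps running.
mount /dev/vg0/pgsnap /mnt/pgsnap
tar czf /backup/pgdata-snapshot.tar.gz -C /mnt/pgsnap .
umount /mnt/pgsnap
lvremove -f /dev/vg0/pgsnap

# Starting a postmaster on a restored copy behaves like crash
# recovery: WAL replay brings it to a consistent state.
```

As Mike Castle notes, this yields a consistent but not a known state; stopping the server around the lvcreate gives both.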
Re: [ADMIN] readline missing on linux?
Bruce Momjian [EMAIL PROTECTED] writes: It appears your readline needs -ltermcap, -lterminfo, or -lcurses to link properly. I think this happens when the readline library is not linked against those when it is created. On Red Hat 8.0, readline depends on -ltermcap. I would imagine the same is true on RH9, but haven't got it installed personally. If you don't have termcap (and possibly termcap-devel, not sure if that's needed) then configure should detect the missing dependency. Maybe it did --- you didn't show us enough of config.log to see what happened. regards, tom lane
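A quick way to check Tom's explanation outside of configure (file names are placeholders; this assumes gcc plus the readline and termcap development packages are installed). The undefined symbols in the config.log below (tgetent, tgoto, tputs, ...) all come from termcap, so adding -ltermcap after -lreadline should make the probe link succeed:

```shell
# Recreate configure's readline probe by hand.
cat > conftest.c <<'EOF'
/* char, not int, so a gcc2 builtin prototype cannot conflict. */
char readline ();
int main () { readline (); return 0; }
EOF

# Fails on RH9 with "undefined reference to `tgetent'" etc.:
gcc -o conftest conftest.c -lreadline

# Succeeds once the termcap symbols are supplied:
gcc -o conftest conftest.c -lreadline -ltermcap
```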
Re: [ADMIN] locale problems
Cool :) Thanks, Laszlo On Thu, 7 Aug 2003, Peter Eisentraut wrote: Hornyak Laszlo writes: I made a mistake. I changed the operating system under my database from Red Hat 8.0 to Debian 3.0 (Woody), without pg_dump-ing the database. The database version hasn't changed. Now I cannot start my database on the same data directory; it stops with the following failure: Failed to initialize lc_messages to '' FATAL: invalid value for option 'LC_MESSAGES': 'en_US.UTF-8' pg_ctl: cannot start postmaster Examine the log output. Install the en_US.UTF-8 locale on your system. Try dpkg-reconfigure locales. -- Peter Eisentraut [EMAIL PROTECTED]
Re: [ADMIN] readline missing on linux?
Just a question --- if you link readline against -ltermcap when the library is created, you don't need to add -ltermcap when you link apps to readline, right? You somehow bind the dependency in to the shared library, right? -- Bruce Momjian
Re: [ADMIN] Postgresql slow on XEON 2.4ghz/1gb ram
Hello, yes - I had the same problem using dual Xeon machines. The problem seems to be related to the kernel, which suffers from a lack of Xeon CPU support; also, the Intel compiler seems to produce much faster binaries than gcc does. I finally skipped the Xeon hardware entirely for use as a PostgreSQL database server. regards, -andreas On Wednesday 06 August 2003 15:20, Wilson A. Galafassi Jr. wrote: Hello. I have this problem: I'm running PostgreSQL 7.3 on a Windows 2000 server with a dual P3 1GHz / 1GB RAM with good performance. For better performance I changed the server to a Xeon 2.4GHz / 1GB RAM, and to my surprise the performance decreased 80%. Has anybody had a similar experience? Does there exist any special configuration for PostgreSQL running on a Xeon processor? Does anyone have any idea to help me? Excuse my bad English. Many thanks, Wilson icq 77032308 msn [EMAIL PROTECTED]
[ADMIN] counting rows
i need to get a row count on a large table. it appears that select count(*) from table; always does a table scan, whether i have an index or not, and that is too slow. so i found this: select reltuples from pg_class where relname = 'table'; this is nice and fast, but not all that accurate at any given time. but, i *did* notice that it is pretty accurate if i run analyze just before. so, i am wondering if analyze also does a table scan, in which case i am back to where i started. it *seems* to run fairly quickly, even on a large table, but i am just wall-clocking all of this. any other ideas on counting rows are welcome. thanks. pg
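For reference, the two approaches side by side ('mytable' is a placeholder name). To answer the question in the post: ANALYZE reads only a sample of the table's pages rather than scanning every row, which is why it runs quickly and why the resulting reltuples figure is an estimate rather than an exact count:

```sql
-- Exact but slow: always a full table scan, even with indexes present.
SELECT count(*) FROM mytable;

-- Approximate but fast: refresh the planner statistics from a sample,
-- then read the stored estimate.
ANALYZE mytable;
SELECT reltuples FROM pg_class WHERE relname = 'mytable';
```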
[ADMIN] readline missing on linux?
I am getting configure failures due to missing readline. I have Red Hat 9, kernel 2.4.21, with readline and readline-devel 2.4.2 installed. I am trying to build 7.3.3. Below is part of my config.log. Any help greatly appreciated TIA rfjones } configure:5786: result: no configure:5800: checking for readline configure:5837: gcc -o conftest -O2 conftest.c -lreadline -lcrypt -lresolv -lnsl -ldl -lm -lbsd >&5 /usr/lib/gcc-lib/i386-redhat-linux/3.2.2/../../../libreadline.so: undefined reference to `tgetnum' /usr/lib/gcc-lib/i386-redhat-linux/3.2.2/../../../libreadline.so: undefined reference to `tgoto' /usr/lib/gcc-lib/i386-redhat-linux/3.2.2/../../../libreadline.so: undefined reference to `tgetflag' /usr/lib/gcc-lib/i386-redhat-linux/3.2.2/../../../libreadline.so: undefined reference to `BC' /usr/lib/gcc-lib/i386-redhat-linux/3.2.2/../../../libreadline.so: undefined reference to `tputs' /usr/lib/gcc-lib/i386-redhat-linux/3.2.2/../../../libreadline.so: undefined reference to `PC' /usr/lib/gcc-lib/i386-redhat-linux/3.2.2/../../../libreadline.so: undefined reference to `tgetent' /usr/lib/gcc-lib/i386-redhat-linux/3.2.2/../../../libreadline.so: undefined reference to `UP' /usr/lib/gcc-lib/i386-redhat-linux/3.2.2/../../../libreadline.so: undefined reference to `tgetstr' collect2: ld returned 1 exit status configure:5840: $? = 1 configure: failed program was: #line 5811 "configure" #include "confdefs.h" /* Override any gcc2 internal prototype to avoid an error. */ #ifdef __cplusplus extern "C" #endif /* We use char because int might match the return type of a gcc2 builtin and then its argument prototype would still apply. */ char readline (); #ifdef F77_DUMMY_MAIN # ifdef __cplusplus extern "C" # endif int F77_DUMMY_MAIN() { return 1; } #endif int main () { readline (); ; return 0; } configure:5837: gcc -o conftest -O2 conftest.c -ledit -lcrypt -lresolv -lnsl -ldl -lm -lbsd >&5 /usr/bin/ld: cannot find -ledit collect2: ld returned 1 exit status configure:5840: $? = 1 configure: failed program was: #line 5811 "configure" #include "confdefs.h"
Re: [ADMIN] Call to undefined function: pg_connect()
shreedhar wrote: Hi All, I updated my PostgreSQL server from 7.2.4, which came along with RH7.3, to 7.3.2. After updating I was not able to connect to the database through PHP. So I recompiled PHP 4.1.2 (the same version I had earlier on my system), and even then I am not able to connect through PHP. Is there any other package that has to be installed for this connection? Please somebody help me out. I have been breaking my head for two days. Thanks for any idea. Regards, Sreedhar Bhaskararaju The only time I have received that error message is when /usr/lib/php4/pgsql.so was not installed; on an RH 9.0 installation of server + KDE + GNOME it seems they didn't install php-pgsql. To verify that your new PHP actually has PostgreSQL configured, create a test file test.php containing <?php phpinfo(); ?> then run php -f test.php > test.html, view test.html with a browser, and verify that pgsql.so made it into the /usr/lib/php4 directory. Hope this helps. I know my Google search didn't, but the other RH 9.0 install did work, and the only thing I did differently was to check "install everything". Hope this helps.
Re: [ADMIN] Postgresql slow on XEON 2.4ghz/1gb ram
The Hermit Hacker wrote: On Wed, 6 Aug 2003, Wilson A. Galafassi Jr. wrote: do you know how to turn off the Xeon hyperthreading? Not sure ... in Unix, you just don't enable it ... at least under FreeBSD (it's disabled by default) ... check in your BIOS though; I *thought* I heard mention that there is a setting in there to disable it, but have never looked :( I thought you could only enable/disable hyperthreading in the BIOS. -- Bruce Momjian
Re: [ADMIN] Stuck Spinlock Error Message
Ludwig Isaac Lim [EMAIL PROTECTED] writes: I'm guessing that the spinlock error occurs after there are around hundreds (or thousands) of queued postmaster processes. Thousands? How large is your max_connections parameter, anyway (and do you really have big enough iron to support it)? The stuck spinlock error implies that some work that should have taken a fraction of a microsecond (namely the time to check and update the internal state of an LWLock structure) took upwards of a minute. Since the process holding the spinlock could lose the CPU, it's certainly physically possible for the actual duration of holding the spinlock to be much more than a microsecond. But the odds of losing the CPU while holding the spinlock are not large, since it's held for just a small number of instructions. And to get an actual stuck spinlock failure would imply that the holding process didn't get scheduled again for more than a minute (while some other process that wanted the spinlock *did* get scheduled again --- repeatedly). I suppose this is possible if your machine is sufficiently badly overloaded. regards, tom lane
Re: [ADMIN] Stuck Spinlock Error Message
Hi: --- Tom Lane [EMAIL PROTECTED] wrote: What can cause a stuck spinlock? In theory, that shouldn't ever happen. Can you reproduce it? regards, tom lane I could not reproduce it, but I'll describe how the error happened. I have a program that reads a large file with 20,000 records and spawns a process that executes a PL/pgSQL stored function based on the content of the file. The following is a table of the SQL statements generated: process 1 SELECT f1(120, 123.3); process 2 SELECT f1(120, 53.3); process 3 SELECT f1(120, 31.3); .. .. process n SELECT f1(120, 2.3); The function f1 is basically defined as CREATE OR REPLACE FUNCTION f1(integer, float8) RETURNS INTEGER AS' DECLARE -- some variable declaration BEGIN -- Lock the table based on the first parameter -- of the stored function (use record level lock) SELECT * FROM t1 WHERE field1 = $1 FOR UPDATE; --a batch of SQL statements here -- END;' LANGUAGE 'plpgsql'; As you noticed, the first parameter of the called function is the same (due to a bug in our program). Since it performs a record-level lock on the record, the processes will queue (i.e. one will execute only when another process relinquishes its lock). I'm guessing that there were just too many postmaster processes concurrently trying to access the same record held by a record-level lock. When I execute the top command in Linux there are a lot of postmaster processes in the process list. Is the spinlock error possible given that scenario? Is this error related to the following error messages: fatal 2: cannot write block 3 of 16556/148333 blind : too many open files in system. Note: I was able to correct the above error messages by increasing the file-max parameter in sysctl.conf. I'm guessing that the spinlock error occurs after there are around hundreds (or thousands) of queued postmaster processes. best regards, ludwig
Re: [ADMIN] Why table has drop, but the foreign key still there?
Raymond Chui [EMAIL PROTECTED] writes: drop table whitepage; delete from state where state_code = 'GU'; ERROR: Relation whitepage does not exist Why I got this error message??!! I could not duplicate this error. What PG version are you running? regards, tom lane
Re: [ADMIN] string function -- replace needs a clear doc
On Mon, 11 Aug 2003, Jie Liang wrote: 1. No doc said % should escape by %%, and select replace('whatever%20sites','%20','%%20') won't work also. 2. v7.3.2 haven't resolved this problem yet, I am not sure the later version. I seem to get the expected results on my 7.3.4 system.
[ADMIN] Conditional row grained replication with DBMirror
I made a modification on DBMirror.pl, an addition in the slavedatabase.conf file, and added another replication table, specialtables. The goal was to have a way of controlling when a row of a table (if the table is present in specialtables) will be mirrored to the remote slave. Let's assume we have a bank's central IT dept master DB and some branches abroad. Rows in some tables (e.g. personel, branches, statistics on transactions, etc...) have no reason to be replicated to all sites, but only to those sites that these rows refer to. E.g. we have test=# \d personel Table public.personel Column | Type | Modifiers ---++--- id | integer | not null name | character varying(20) | branchid | integer | test=# or test=# \d branch Table public.branch Column | Type | Modifiers ---++--- id | integer | not null country | character varying(20) | address | text | test=# And we require that inserts, updates on table branch with id = 1 go only to the remote site (the ip of) branch 1, and that inserts, updates on personel with branchid = 2, in the same fashion, go only to (the ip of) branch 2. I maintain the table specialtables test=# \d specialtables Table public.specialtables Column | Type | Modifiers ---++--- tblname | character varying(100) | not null siteidkeyname | character varying(20) | Indexes: specialtables_pkey primary key btree (tblname) test=# in which, for each conditional table, the corresponding column name (the value of siteidkeyname) is given, indicating which column in the PendingData.Data column is to be examined for equality against the site id. The site id for each slave is defined in an additional field in the slaveInfo struct in slavedatabase.conf. Please tell me if i should send the patches to DBMirror.pl, MirrorSetup.sql, and slavedatabase.conf.
-- == Achilleus Mantzios S/W Engineer IT dept Dynacom Tankers Mngmt Nikis 4, Glyfada Athens 16610 Greece tel:+30-210-8981112 fax:+30-210-8981877 email: achill at matrix dot gatewaynet dot com mantzios at softlab dot ece dot ntua dot gr
Re: [ADMIN] string function -- replace needs a clear doc
Jie Liang [EMAIL PROTECTED] writes: 1. No doc said % should escape by %%, and select replace('whatever%20sites','%20','%%20') won't work also. 2. v7.3.2 haven't resolved this problem yet, I am not sure the later version. In 7.3.3 release notes: * Fix misbehavior of replace() on strings containing '%' regards, tom lane
[ADMIN] string function -- replace needs a clear doc
urldb=# select replace('whateveritis','a','A'); replace -- whAteveritis (1 row) urldb=# select replace('whateveritis','e','E'); replace -- whatEvEritis (1 row) urldb=# select replace('whatever%20itis','e','E'); replace - whatEvEr 1tis (1 row) urldb=# select replace('whatever%%20itis','e','E'); replace - whatEvEr%20itis (1 row) urldb=# select replace('whatever%%20sitis','e','E'); replace -- whatEvEr%20sitis (1 row) urldb=# select replace('whatever%20sitis','e','E'); server closed the connection unexpectedly This probably means the server terminated abnormally before or while processing the request. The connection to the server was lost. Attempting reset: Failed. !# psql urldb Welcome to psql 7.3.2, the PostgreSQL interactive terminal. Type: \copyright for distribution terms \h for help with SQL commands \? for help on internal slash commands \g or terminate with semicolon to execute query \q to quit urldb=# select replace('whatever%20sitis','si','SI'); replace --- whateverSItis (1 row) 1. No doc said % should escape by %%, and select replace('whatever%20sites','%20','%%20') won't work also. 2. v7.3.2 haven't resolved this problem yet, I am not sure the later version. Jie Liang
[ADMIN] Why table has drop, but the foreign key still there?
Here are the simple things I did create table state ( state_code char(2) not null, state varchar(15) not null, primary key (state_code) ); create table whitepage ( user_id char(8) not null, email varchar(50), telephone char(16) not null, contact_name varchar(30) not null, city varchar(20), state_code char(2), primary key (user_id), foreign key (state_code) references state (state_code) ); insert into state (state_code,state) values ('GU','Guam'); drop table whitepage; delete from state where state_code = 'GU'; ERROR: Relation whitepage does not exist Why did I get this error message??!! The whitepage table was already dropped. When a table is dropped, will its foreign key constraints also be dropped?? Now, in order for me to delete a row from the state table, I HAVE TO re-create the whitepage table. That's silly!!! What shall I do? How do I delete a row in state without re-creating the whitepage table??? Thank you very much in advance!
Re: [ADMIN] Why table has drop, but the foreign key still there?
insert into state (state_code,state) values ('GU','Guam'); drop table whitepage; delete from state where state_code = 'GU'; ERROR: Relation whitepage does not exist Old version of PostgreSQL? Effort went into cleaning up inter-object dependencies in 7.3. I don't recall having that particular issue in versions prior either. That said, you can remove the triggers that are on table state beginning with the characters RI. \d state in psql will tell you what they are.
Re: [ADMIN] Why table has drop, but the foreign key still there?
On Wed, 13 Aug 2003, Raymond Chui wrote: Here are the simple things I did create table state ( state_code char(2) not null, state varchar(15) not null, primary key (state_code) ); create table whitepage ( user_id char(8) not null, email varchar(50), telephone char(16) not null, contact_name varchar(30) not null, city varchar(20), state_code char(2), primary key (user_id), foreign key (state_code) references state (state_code) ); insert into state (state_code,state) values ('GU','Guam'); drop table whitepage; delete from state where state_code = 'GU'; ERROR: Relation whitepage does not exist What version are you using? I can't seem to replicate this given the above on 7.2, 7.3 or 7.4. If these tables were preexisting and had gone through a dump cycle from 7.0 or 7.1, there was a bug in pg_dump for those versions that would lose the connection between the triggers and the other table of the constraint. In any case, you'll need to find and drop the two orphaned triggers on state (see techdocs for information on how to find them).
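A sketch of how the orphaned RI triggers mentioned in this thread can be located through the system catalogs (the trigger name in the DROP statement is an example; actual names vary per cluster and must be taken from the query output):

```sql
-- List all triggers attached to "state"; orphaned foreign-key triggers
-- have names beginning with "RI_ConstraintTrigger".
SELECT tgname
FROM pg_trigger
WHERE tgrelid = (SELECT oid FROM pg_class WHERE relname = 'state');

-- Drop each orphan by the name reported above, e.g.:
-- DROP TRIGGER "RI_ConstraintTrigger_12345" ON state;
```

After the two orphaned triggers are dropped, the DELETE on state should succeed without re-creating whitepage.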
Re: [ADMIN] [GENERAL] Fatal error: Call to undefined function: pg_connect()
Hi did you configure php for PostgreSQL support ./configure --with-pgsql Regards Conni
Re: [GENERAL] [ADMIN] Call to undefined function: pg_connect()
-rw-r--r-- 1 root root 99586 Aug 6 20:37 libpq.a lrwxrwxrwx 1 root root 12 Aug 6 20:37 libpq.so -> libpq.so.3.0 lrwxrwxrwx 1 root root 12 Aug 6 20:37 libpq.so.3 -> libpq.so.3.0 -rwxr-xr-x 1 root root 92088 Aug 6 20:37 libpq.so.3.0 I am very sorry for my last mail. The above are the libpq.so files available in my RH7.3 + PHP 4.1.2 + PostgreSQL 7.3.2 system. Is this the right combination of libpq.so? Sreedhar - Original Message - From: Gregory S. Williamson [EMAIL PROTECTED] To: shreedhar [EMAIL PROTECTED]; Joe Conway [EMAIL PROTECTED] Cc: Postgre Admin [EMAIL PROTECTED]; Postgre General [EMAIL PROTECTED] Sent: Thursday, August 07, 2003 1:03 PM Subject: RE: [GENERAL] [ADMIN] Call to undefined function: pg_connect() I don't have a clue about which is which, but I bet providing some dates (ls -l) would help in figuring out which is the most recent vs. the oldest ... might provide helpful information to people who know more about compiling this beast. Greg Williamson -Original Message- From: shreedhar [mailto:[EMAIL PROTECTED] Sent: Thu 8/7/2003 12:13 AM To: Joe Conway Cc: Postgre Admin; Postgre General Subject: Re: [GENERAL] [ADMIN] Call to undefined function: pg_connect() /usr/lib/libpq.so.2 /usr/lib/libpq.so.2.0 /usr/lib/libpq.so.2.2 /usr/lib/libpq.so I have the above versions of libpq.so in my system. Which do I have to use? I suppose I might have to change some config files; where should I change them? Thanks a lot, With best regards, Sreedhar - Original Message - From: Joe Conway [EMAIL PROTECTED] To: shreedhar [EMAIL PROTECTED] Cc: Postgre Admin [EMAIL PROTECTED]; Postgre General [EMAIL PROTECTED] Sent: Thursday, August 07, 2003 10:24 AM Subject: Re: [GENERAL] [ADMIN] Call to undefined function: pg_connect() shreedhar wrote: Hi All, I updated my PostgreSQL server from 7.2.4, which came along with RH7.3, to 7.3.2. After updating I was not able to connect to the database through PHP. So I recompiled PHP 4.1.2 (the same version I had earlier on my system), and even then I am not able to connect through PHP.
Is there any other package that has to be installed for this connection? If I remember correctly, this is because libpq.so changed from libpq.so.2 to libpq.so.3. Is there an old copy of libpq.so.2 hanging around that is getting linked with instead of libpq.so.3? Did you re-run configure and do a `make clean` on PHP before rebuilding it? If you are really desperate, try creating a libpq.so.2 symlink from libpq.so.3. HTH, Joe