Re: [GENERAL] importing db as text files
On Wed, Aug 13, 2003 at 14:14:20 -0700, expect [EMAIL PROTECTED] wrote:
> What's the big deal with importing text files? I have a 70 MB file to import
> and it's been one problem after another. I used the copy command and it
> appears that it's just not possible.

That is unlikely, but without more details it is hard to say what you need to do.

> I finally massaged the file into a .sql file and ran that using \i db.sql
> but that failed too because I overlooked ' in names like D'Adario. The other
> problem I encountered was that a numeric field had to have data in it, pg
> would not default to the default value.

You can use the keyword DEFAULT in insert statements to get a default value.

> So instead of massaging all the data again I decided to change the data type
> for that column. This is my first experience with postgresql and I'm
> wondering if I should expect to encounter similar pain as I go further into
> this? So far it's been very painful trying to do what I thought would be
> easy and what I think should be easy.

The impression I get is that you expect postgres to make a best guess when presented with ambiguous data. That is a very dangerous thing to do. I would much prefer ambiguous data be rejected so that I can make sure what I think the value is, is the same as what the database thinks the value is.

---(end of broadcast)---
TIP 6: Have you searched our list archives? http://archives.postgresql.org
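A sketch of the two fixes discussed above, for anyone hitting the same wall (the table and column names are invented; only the D'Adario value comes from the thread):

```sql
-- In SQL string literals a single quote is escaped by doubling it,
-- and the DEFAULT keyword asks for the column's default value:
INSERT INTO customers (name, balance) VALUES ('D''Adario', DEFAULT);

-- With COPY there is no DEFAULT keyword, but quotes need no escaping
-- at all, since fields are delimited by tabs in the default format:
COPY customers (name) FROM stdin;
D'Adario
\.
```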
Re: [GENERAL] Join faster than single table query
Hi Martijn: Thanks for your answer, I really missed something ;-) Kind regards, Ruben.

Martijn van Oosterhout wrote:
> On Mon, Aug 11, 2003 at 01:48:21PM +0200, ruben wrote:
>> Hi: I must have missed something, but how is it possible that a join on
>> tables A and B is faster (a lot faster) than a query to one of the tables
>> with the same conditions? The problem seems to be with the query plan: in
>> the case of a query to table_a only, the planner executes a Seq Scan; in
>> the case of a join, an Index Scan. table_a has about 4M records, so the
>> difference is quite noticeable.
>>
>> explain select * from table_a where field_1=1 and field_2='20030808' and field_3='963782342';
>> NOTICE: QUERY PLAN:
>> Seq Scan on table_a (cost=0.00..373661.73 rows=12 width=227)
>> EXPLAIN
>
> Let me guess: field_1 is not an int4, and since you didn't quote the
> constant 1, it can't use the index. The second query has matching types,
> so it can use the index. Hope this helps,

---(end of broadcast)---
TIP 9: the planner will ignore your desire to choose an index scan if your joining column's datatypes do not match
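Martijn's diagnosis can be tested directly. A sketch, assuming field_1 is, say, an int8 (the thread only says it is not an int4): quoting or casting the constant lets the planner match it to the column's type and consider the index:

```sql
-- Unquoted, 1 is taken as int4 and (in these older releases) defeats
-- an index on an int8 column.  Quoted, it is coerced to the column's type:
EXPLAIN SELECT * FROM table_a
 WHERE field_1 = '1'
   AND field_2 = '20030808'
   AND field_3 = '963782342';

-- An explicit cast works as well:
-- WHERE field_1 = 1::int8
```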
[GENERAL] Vacuuming unicode database
My Postgres databases used to have the default (SQL_ASCII) encoding. I could store any 8-bit character in it regardless of the actual charset, because all clients also used the default encoding and no charset conversion was done. Now we started a new project with Qt3, and its Postgres driver defaults to UNICODE client encoding. This time charset conversion comes into play and messes up all 8-bit characters. This forces me to create all databases with the correct charsets. So I recreated my database with UNICODE encoding and imported all data into it, paying attention to client encoding. I didn't re-initdb the whole cluster, only recreated the problematic database. Everything seems to work fine, only I can't vacuum the database:

epos=# vacuum verbose analyze;
INFO: --Relation pg_catalog.pg_description--
INFO: Pages 18: Changed 0, Empty 0; Tup 1895: Vac 0, Keep 0, UnUsed 5. Total CPU 0.00s/0.00u sec elapsed 0.00 sec.
INFO: --Relation pg_toast.pg_toast_16416--
INFO: Pages 0: Changed 0, Empty 0; Tup 0: Vac 0, Keep 0, UnUsed 0. Total CPU 0.00s/0.00u sec elapsed 0.00 sec.
INFO: Analyzing pg_catalog.pg_description
INFO: --Relation pg_catalog.pg_group--
INFO: Pages 1: Changed 0, Empty 0; Tup 31: Vac 0, Keep 0, UnUsed 31. Total CPU 0.00s/0.00u sec elapsed 0.00 sec.
INFO: --Relation pg_toast.pg_toast_1261--
INFO: Pages 0: Changed 0, Empty 0; Tup 0: Vac 0, Keep 0, UnUsed 0. Total CPU 0.00s/0.00u sec elapsed 0.00 sec.
INFO: Analyzing pg_catalog.pg_group
ERROR: Invalid UNICODE character sequence found (0xdc6b)

Table pg_group is giving errors, because I have a group name with 8-bit characters in it. As I understand it, groups are common for all databases and pg_group is created during initdb, so it should be considered as having the SQL_ASCII charset, not UNICODE. Seems like a bug to me? What would you suggest in this case:

1. Re-initdb with UNICODE encoding and recreate all databases. Basically all databases should have the same encoding.
2. Use some single-byte encoding for the database, instead of UNICODE.
Vacuuming wouldn't complain any more, but I have some doubts that CREATE GROUP Name with 8-bit characters behaves differently depending on encoding of the active database. Tambet ---(end of broadcast)--- TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]
Re: [GENERAL] Sorting Problem
Perhaps the only way to get around the cache problem is to use an ISO-8859-x 8-bit character set, but to have per-table, or per-column, encoding attributes. And of course, ways to access what those are, in the Postgres API. Good for speed, but not for easy storing of multiple languages/encodings per column/table.

Gianni Mariani wrote:
> Dennis Gearon wrote:
>> I agree with all of that except for one caveat: all my reading, and just
>> general off the cuff thinking, says that processing variable width
>> characters SIGNIFICANTLY slows an application. It seems better to PROCESS
>> fixed width characters (1,2,4 byte), and TRANSMIT variable width
>> characters (avoiding the null problem.)
>
> I can and have solved that problem. If you can assume utf-8 encoding then
> there is a bunch of tricks available to you that takes this problem away.
> The other problem with memory (and hence cache) utilization of a
> wide-char-only solution is far more significant. Cache effects are the
> primary killer for performance in an app like a database. Anyhow, before
> making any decisions one should do a bunch of analysis. Cheers G

---(end of broadcast)---
TIP 6: Have you searched our list archives? http://archives.postgresql.org
Re: [GENERAL] v7.4 Beta 1 Bundle Available for Testing ...
On 6 Aug 2003, Robert Treat wrote:
> On Tue, 2003-08-05 at 20:18, Dann Corbit wrote:
>> For me, I can only find these directories:
>> ftp://ftp8.us.postgresql.org/pub/pgsql/source/v7.4/
>> ftp://ftp8.us.postgresql.org/pub/postgresql/source/v7.4/
>> And both of them are empty.
>
> Might be some lag in that mirror synching; try
> ftp://ftp3.us.postgresql.org/pub/postgresql/source/v7.4/

I just checked, and out of the 6 listed US mirrors, all but ftp8 and ftp15 are updated. ftp15 seems more out of sync than ftp8, as it doesn't even have 7.3.4 yet. Maybe its sync script only runs once a week or so.

---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ? http://www.postgresql.org/docs/faqs/FAQ.html
[GENERAL] Why the duplicate messages to pgsql-general?
Why does everyone send to the list and the author? Can someone make this stop? Or is this the preferred method of response on this list? The pgsql-novice doesn't seem to suffer this annoyance. ---(end of broadcast)--- TIP 3: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [EMAIL PROTECTED] so that your message can get through to the mailing list cleanly
[GENERAL] Postgresql slow on XEON 2.4ghz/1gb ram
Hello. I have this problem: I'm running Postgres 7.3 on a Windows 2000 server with a dual P3 1 GHz / 1 GB RAM with good performance. For better performance I changed the server to a Xeon 2.4 / 1 GB RAM, and to my surprise the performance decreased 80%. Has anybody had a similar experience? Does any special configuration exist for running Postgres on a Xeon processor? Does anyone have any idea to help me? Excuse my bad English. Many thanks, Wilson icq 77032308 msn [EMAIL PROTECTED]
Re: [GENERAL] ext3 block size
It was RH8 where I saw the issues. It was so bad that I had one server freeze 20 minutes after reboot. And it wasn't a hardware issue; that server is now running with several months of uptime under heavy load.

Jonathan Bartlett wrote:
> What distribution are you running? I and a lot of other people use RH8 on
> multiprocessor boxes, and it works great! Jon
>
> On Wed, 6 Aug 2003, DeJuan Jackson wrote:
>> Don't know the answer to your question, but I thought I would just pipe in
>> and say that if this is an SMP (has multiple processors) Linux box you
>> don't want to use ext3!!! I used ext3 on my SMP box here at work and now I
>> can't have children (I guess it would help if I got a wife first)!! But in
>> all seriousness, SMP + ext3 = BAD (unpredictable crashes loosely related
>> to system load).
>>
>> Wilson A. Galafassi Jr. wrote:
>>> hello. my database size is 5GB. what is the block size recommended? thanks wilson

---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster
Re: [GENERAL] Could not receive data from server
7.4 has an --enable-thread-safety option which compiles libpq and ecpg for threading, with that flag on operating systems that need it.

--- Amin Abdulghani wrote:
> Compiling the postgres client with -D_REENTRANT seems to remove this
> problem. It seems to have been a thread issue with webware (and more
> specifically errno associated with pqReadData). On a side note, I noticed
> that the configure.in for 7.2.1 doesn't seem to have any option for
> compiling with the above flag. Is it available for the next releases
> 7.3/7.4? Thanks.. Amin
>
> On Tue, 05 Aug 2003 12:10:31 -0400 Amin Abdulghani [EMAIL PROTECTED] wrote:
>> Hi, Running Postgres 7.2.1 with python/psycopg (python interface for
>> postgres using libpq)/webware application server on Solaris, I get a
>> postgres error "could not receive data from server: Error 0" when I try to
>> retrieve a large text column (16 K) from a table. The query I use is
>> "select col from table". However, if I avoid the webware application
>> server, this seems to work ok. I am just wondering what effect webware
>> could have on the postgres client libraries. The postgres server is on a
>> different machine. It doesn't seem to be a memory related issue, as
>> webware uses only 5-6 MB on a machine having 256 MB. Thanks.. Amin

--
Bruce Momjian | http://candle.pha.pa.us
[EMAIL PROTECTED] | (610) 359-1001
+ If your life is a hard drive, | 13 Roberts Road
+ Christ can be your backup. | Newtown Square, Pennsylvania 19073

---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster
Re: [GENERAL] 7.3.3 behaving differently on OS X 10.2.6 and FreeBSD
DeJuan Jackson wrote: I have a suspicion that the version might be different. I have the same symptom here on two different RH 7.3 boxes one running 7.3.2 and the other running 7.3.3 It would appear 7.3.2 is more strict about the naming of the GROUP BY fields. They really are the same versions. For the OS X machine I installed from source downloaded from the postgresql ftp site. FreeBSD was installed from the port but my ports tree is up to date. On freebsd: 501 $ pg_ctl --version pg_ctl (PostgreSQL) 7.3.3 On OS X: 516 $ pg_ctl --version pg_ctl (PostgreSQL) 7.3.3 ---(end of broadcast)--- TIP 6: Have you searched our list archives? http://archives.postgresql.org
Re: [GENERAL] postmaster(s) have high load average
Oops! [EMAIL PROTECTED] (Chris Webster) was seen spray-painting on a wall: Martijn van Oosterhout wrote: Have you run VACUUM and/or VACUUM FULL and/or ANALYZE recently? a) yes. I have it run analyze every 30 minutes or 1600 record additions. Records are never updated or deleted so I assume I don't need vacuum. You only really need to run analyze when the statistical characteristics of the data changes; as the database grows, that is fairly likely to stabilize somewhat so that you can ANALYZE less frequently over time... Have you verified that nothing has gotten touched? Run a VACUUM VERBOSE and see what it does... Note that if you ever get cases where records are added but rolled back due to some later part of a transaction failing, that too will lead to dead tuples... b) It does it even at start up when there are fewer than 100 records in the database. c) Would this even matter for clients that only connect but NEVER make any requests from the database? Run VACUUM VERBOSE on it; you'll no doubt see that some internal tables such as pg_activity, pg_statistic, and such have a lot of dead tuples. Establishing a connection leads to _some_ DB activity, and probably a dead tuple or two; every time you ANALYZE, you create a bunch of dead tuples since old statistics are killed off. -- If this was helpful, http://svcs.affero.net/rm.php?r=cbbrowne rate me http://www3.sympatico.ca/cbbrowne/sap.html Rules of Optimization: Rule 1: Don't do it. Rule 2 (for experts only): Don't do it yet. -- M.A. Jackson ---(end of broadcast)--- TIP 6: Have you searched our list archives? http://archives.postgresql.org
[GENERAL] strategies for optimising common lookups etc
Hi all, apologies if this is a dumb question - not really sure where to look for this kind of stuff... I have a frequently called stored procedure that needs to access some configuration information to determine how it should behave. Currently I have the configuration information in its own table, and the sp does a lookup select item from config where pk = I am wondering whether there is any way to cache the result of a lookup across instances of the stored procedure call? The configuration value will rarely change, and as the procedure is called often it seems a bit of a waste of resources looking it up each time the procedure is run. Is this the sort of thing that internal variables would handle (by that I mean using set and show)? TIA Martin -- Martin Hart Arnclan Limited 53 Union Street Dunstable, Beds LU6 1EX http://www.arnclanit.com ---(end of broadcast)--- TIP 6: Have you searched our list archives? http://archives.postgresql.org
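One possible trick in this direction (a sketch; `config`, `item` and `pk` come from the message, the function name is invented): wrap the lookup in a function and declare its volatility so repeated calls stay cheap. Since 7.3 you can mark such a function STABLE, or even IMMUTABLE if you accept having to re-create it (or at least re-plan) whenever the configuration row changes:

```sql
CREATE OR REPLACE FUNCTION get_config(text) RETURNS text AS '
    SELECT item FROM config WHERE pk = $1;
' LANGUAGE sql STABLE;

-- Called from another function or query:
-- SELECT get_config(''some_setting'');
```

Internal session variables via SET/SHOW would also work, but they have to be re-initialized per connection, so the function wrapper is usually less fragile.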
[GENERAL] crashing Xeon?
Hi list, I have been using PostgreSQL for a GIS database on different Linux (mostly SuSE 8.1) boxes, and so far everything has been fine. Now I tried to set up a bigger database server (HP ProLiant DL 380-G3, 2x Intel Pentium4-Xeon 2.8 GHz, 4 GB memory and a RAID 5 system with ca. 500 GB diskspace) under SuSE 8.1. When compiling PostgreSQL 7.3.4 (GNU Make version 3.79.1), everything went fine. Then I wanted to set up several database instances and fill them from SQL dumps (1 ... 1.5 GB each) with lots of indices (btree and rtree). I monitored the filling process. When reading data, all was fine. When setting up the indices after reading the data, I managed to crash the system (?!) several times (not regularly, but mostly related to heavy load on the box, e.g. zipping large files while filling the databases). Normally, this should result only in slowing down the machine (maybe VERY slow), but I see the postmaster processes lose CPU percentage while the system load (xload) even increases, until everything (not only the database) is blocked. When doing the same things on our older Athlon single-processor boxes, everything went slowly, but fine.

a) The dumps come from 7.1.3 databases. This should not be any problem, I think.
b) Normally, I would expect only the application (i.e. the database) to crash, but not the system.
c) Do I run into some multiprocessor problems? Missed some compiler option providing thread-safe processing?

Is there anybody who can give me some hint on fixing the problem? Any help will be greatly appreciated. Regards, Hubert

--
Dr.-Ing. Hubert Fröhlich
Bezirksfinanzdirektion München
Alexandrastr. 3, D-80538 München, GERMANY
Tel.: +49 (0)89 / 2190 - 2980
Fax: +49 (0)89 / 2190 - 2459
[EMAIL PROTECTED]

---(end of broadcast)---
TIP 9: the planner will ignore your desire to choose an index scan if your joining column's datatypes do not match
[GENERAL] 7.4Beta1 failed to create socket: Address family not supported byprotocol
I'm receiving the following error during startup:

Aug 10 14:11:27 thunder postgres[18613]: [1-1] LOG: failed to create socket: Address family not supported by protocol
Aug 10 14:11:27 thunder postgres[18619]: [2-1] LOG: database system was shut down at 2003-08-10 14:11:11 MDT
Aug 10 14:11:27 thunder postgres[18619]: [3-1] LOG: checkpoint record is at 4/E28389B4
Aug 10 14:11:27 thunder postgres[18619]: [4-1] LOG: redo record is at 4/E28389B4; undo record is at 0/0; shutdown TRUE
Aug 10 14:11:27 thunder postgres[18619]: [5-1] LOG: next transaction id: 80139; next oid: 35014046
Aug 10 14:11:27 thunder postgres[18619]: [6-1] LOG: database system is ready

And yet everything appears to be working fine. Something I've done wrong in the build/configure?

tassiv=# select version();
version
---
PostgreSQL 7.4beta1 on i686-pc-linux-gnu, compiled by GCC gcc (GCC) 3.2.2 (Mandrake Linux 9.1 3.2.2-3mdk)

Cheers, Rob
--
14:51:35 up 9 days, 7:37, 4 users, load average: 1.13, 1.18, 1.03
Re: [GENERAL] Update of foreign key values
On 11 Aug 2003, Ron Johnson wrote: On Mon, 2003-08-11 at 18:40, Roderick A. Anderson wrote: On 11 Aug 2003, Ron Johnson wrote: Maybe this will do it: http://www.postgresql.org/docs/7.3/static/sql-set-constraints.html Saw this but my take was it required the original constraint to be created with the deferred(able) option. But hey, I'll give it a try since all my Where do you see that? or INITIALLY IMMEDIATE NOT DEFERRABLE. The third class is not affected by the SET CONSTRAINTS command.??? Yes. Either DEFERRABLE or INITIALLY DEFERRED must be given in order for set constraints to be meaningful. This might be another good place to consider a little clarification (or maybe a doc note in the interactive docs) ---(end of broadcast)--- TIP 6: Have you searched our list archives? http://archives.postgresql.org
Re: [GENERAL] v7.4 Beta 1 Bundle Available for Testing ...
On Wed, Aug 06, 2003 at 13:40:29 -0300, The Hermit Hacker [EMAIL PROTECTED] wrote:
> just looking at the rsyncd.conf file on svr1 itself, is the following a valid address:
> hosts allow = \
> 114.73.139.66.in-addr.arpa, \

You could have an A record with a domain name using the typical format used for PTR records. However, in this case there is no A record for 114.73.139.66.in-addr.arpa. The PTR record points to doodah.gremlins.biz. There is an A record for doodah.gremlins.biz which lists 66.139.73.114, which matches the original PTR record.

---(end of broadcast)---
TIP 8: explain analyze is your friend
Re: [GENERAL] Sorting Problem
On Wed, 13 Aug 2003, Dennis Gearon wrote: Got a link to that section of the standard, or better yet, to a 'interpreted' version of the standard? :-) The standard draft yes, an interpreted version, unfortunately not (unless Date's book covers it and I can find where my copy is. Here are some of the highlights k) form-of-use: A convention (or encoding) for representing characters (in character strings). Some forms-of-use are fixed-length codings and others are variable-length codings. l) form-of-use conversion: A method of converting character strings from one form-of-use to another form-of-use. A character set is described by a character set descriptor. A character set descriptor includes: - the name of the character set or character repertoire, - if the character set is a character repertoire, then the name of the form-of-use, - an indication of what characters are in the character set, and - the name of the default collation of the character set. For every character set, there is at least one collation. A collation is described by a collation descriptor. A collation descriptor includes: - the name of the collation, - the name of the character set on which the collation operates, - whether the collation has the NO PAD or the PAD SPACE attribute, and - an indication of how the collation is performed. --- The character data types and literals can include a character set definition. Character type columns can include a collation. There's a COLLATE blah clause that looks like it can be used in expressions as well. ---(end of broadcast)--- TIP 3: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [EMAIL PROTECTED] so that your message can get through to the mailing list cleanly
Re: [GENERAL] multiple insert into's (may be NEWBIE question)
On 5 Aug 2003, Ron Johnson wrote:
> On Tue, 2003-08-05 at 14:42, Stephan Szabo wrote:
>> On Tue, 5 Aug 2003, scott.marlowe wrote:
>>> On Tue, 5 Aug 2003, Williams, Travis L, NPONS wrote:
>>>> I have a table (lets say a,text b,text) and I want to insert the data
>>>> jim,jimmy and trav,travis can I do this with 1 insert into statement
>>>> instead of 2?
>>>
>>> Not with the current implementation of insert. There's been some
>>> discussion of adding the multiple tuple insert seen in other databases,
>>> but I don't think anyone's actually done it or even agreed on exactly how
>>> to do it. I'm not sure if SQL 3 covers this; it seems like it hints at
>>> it, but I can't read that stuff all that well most of the time. I don't
>>> think there's a way right now though, without using some form of copy.
>>
>> Well, you can do it with insert ... select and union.
>> insert into tab select 'jim', 'jimmy' union select 'trav', 'travis';
>
> But the bottom line question is why do it in the 1st place? Multiple INSERT
> commands work like a peach, as does COPY from stdin (thanks again, Jason).

Some constraints might make such things meaningful when compared to multiple inserts (for example, immediate checked self-referential foreign keys where you might want to insert a row and the row it depends on in a single statement). Copy should work, but that's fairly different than inserts (if only because inserts are likely to work on other systems).

---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [EMAIL PROTECTED] so that your message can get through to the mailing list cleanly
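Written out as a script, the union workaround from the thread (UNION ALL is my addition: it skips the duplicate-eliminating sort that plain UNION implies, which also matters if two of the rows happen to be identical):

```sql
CREATE TABLE tab (a text, b text);

-- One statement, two rows:
INSERT INTO tab
SELECT 'jim', 'jimmy'
UNION ALL
SELECT 'trav', 'travis';
```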
Re: [GENERAL] v7.4 Beta 1 Bundle Available for Testing ...
On Tue, 2003-08-05 at 20:18, Dann Corbit wrote: -Original Message- From: The Hermit Hacker [mailto:[EMAIL PROTECTED] Sent: Tuesday, August 05, 2003 5:01 PM To: [EMAIL PROTECTED] Cc: [EMAIL PROTECTED] Subject: [GENERAL] v7.4 Beta 1 Bundle Available for Testing ... Just a quick note to everyone that v7.4 is now official in Beta Freeze, with the first Bundle available for download, testing and bug reports ... The Bundle is available on all FTP mirrors (in both .gz and .bz2 format) under: /pub/source/v7.4 We encourage everyone that is able to download, test and report any bugs on this release to do so, so that we can ensure that this release is as strong as all our past releases. All bug reports should be addressed to: [EMAIL PROTECTED] For me, I can only find these directories: ftp://ftp8.us.postgresql.org/pub/pgsql/source/v7.4/ ftp://ftp8.us.postgresql.org/pub/postgresql/source/v7.4/ And both of them are empty. Might be some lag in that mirror synching; try ftp://ftp3.us.postgresql.org/pub/postgresql/source/v7.4/ When the drop eventually does become available will it contain any Win32 stuff (raw as it is)? depends on what you mean by any. I believe that the standard tarball will compile under windows, but to what extent it will actually run I couldn't say. This release is not intended for native use on windows (that had to be pushed back to 7.5), though it will certainly run via cygwin. Robert Treat -- Build A Brighter Lamp :: Linux Apache {middleware} PostgreSQL ---(end of broadcast)--- TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]
Re: [GENERAL] Sorting Problem
Ok, thanks for all the discussion that followed, very educational :-)) But nobody really followed up my question :-(

For example, you have a table that is displayed in the browser. You want to let the user do sorting on one or multiple columns, including those which contain localized strings. If the db supports sorting for multiple languages, you can retrieve all the rows and do ORDER BY on the columns directly. But if the db doesn't support that, in Java, you will have to retrieve all the rows first, then sort the objects on the multiple columns listed in the ORDER BY in Java. It is much slower than the previous one. My problem is that if initdb is done with en_US, and a customer in Japan requests the page with that sorted table, the text is not sorted in a Japanese order. Then I have a problem. Any suggestions on that??

Dennis Gearon wrote:
I agree, mostly. In the case of a database, I would bet that the INTERNAL, IN-APPLICATION processing FAAARRR exceeds that of sending and receiving it, i.e. comparisons, sorts, triggers, indexes, views, functions, logging to tables, ordering by, grouping, etc., except backups, restores, logging to text files (these would be good in UTF8).

Bruce Momjian wrote:
I think the question is how often are you passing data around/storing it _in_ your application and how often are you processing it.

Dennis Gearon wrote:
I agree with all of that except for one caveat: all my reading, and just general off the cuff thinking, says that processing variable width characters SIGNIFICANTLY slows an application. It seems better to PROCESS fixed width characters (1,2,4 byte), and TRANSMIT variable width characters (avoiding the null problem.)

Gianni Mariani wrote:
Dennis Gearon wrote:
Got a link to that section of the standard, or better yet, to an 'interpreted' version of the standard? :-)

Stephan Szabo wrote:
On Wed, 13 Aug 2003, Dennis Gearon wrote:
Dennis Björklund wrote:
In the future we need indexes that depend on the locale (and a lot of other changes).
I agree.
I've been looking at the web on this subject a lot lately. I am **NOT** a microslop fan, but SQL-SERVER even lets a user define a language (maybe encoding) down to the column level! I've been reading on GNU C and on languages, encoding, and localization.
http://pauillac.inria.fr/~lang/hotlist/free/licence/fsf96/drepper/paper-1.html
http://h21007.www2.hp.com/dspp/tech/tech_TechSingleTipDetailPage_IDX/1,2366,1222,00.html

There are three basic approaches to doing different languages in computerized text:
A/ various adaptations of the 8-bit character set, i.e. the ISO-8859-x series.
B/ wide characters -- this should be how Postgres stores data internally.
C/ multibyte characters -- this is how Postgres should default to sending data OUT of the application, i.e. to the display or the web, or other system applications.

SQL has a system for defining character set specifications, collations and such (per column/literal in some cases). We should probably look at it before making decisions on how to do things.

I thought UNIX (SCOTM) systems also had a way of being able to define collation order. See ftp://dkuug.dk/i18n/WG15-collection/locales for a collection of all ISO standardized locales (the WG15 ISO work group's stuff). Do a man localedef on most Linuxen or UNIXen.

As for wide characters vs multibyte, there is no clear winner. The right answer DEPENDS on the situation. Wide characters on some platforms are 16 bit, which means that when you do Unicode you'll still have problems with surrogate pairs (meaning that it's still multi (wide) char), so you still have all the problems of multi-byte encodings. You could decide to process everything in a PG-specific 4 byte wide char and do all text in Unicode, but the overhead of processing 4 times the data is quite significant. The other option is to store all data in utf-8 and have all text code become utf-8 aware.
I have found in practice that the utf-8 option is significantly easier to implement, 100% Unicode compliant and the best performer (because of reduced memory requirements). The Posix APIs for locales are not very good for modern day programs. I'm not sure where the mbr* and the wcr* APIs are in the standardization process, but if these are not well supported, you're on your own and will need to implement similar functionality from scratch; and for that matter, the collation functions all operate on a current locale, which is really difficult to work with in multi-locale applications.

---(end of broadcast)---
TIP 7: don't forget to increase your free space map settings
Re: [GENERAL] Moving from MySQL
On 09/08/2003 01:41 Simon Windsor wrote:
> Hi, I am sorry for mailing this list directly, but I am planning to migrate
> a web application from MySQL to Postgres. I am aware of a number of
> applications to help in this process, ie my2pg, etc. The biggest stumbling
> block I face is replication. What facilities exist within Postgres to
> replicate data from one instance to another?

There are several replication mechanisms available for PostgreSQL. The two shipped with it are dbmirror and rserv. I believe that rserv will shortly be replaced by erserv, which is based on rserv and uses Java instead of Perl.

> Which version of Postgres would people recommend?

The latest production release is 7.3.4. I'd recommend using that. HTH
--
Paul Thomas
Thomas Micro Systems Limited | Software Solutions for the Smaller Business
Computer Consultants | http://www.thomas-micro-systems-ltd.co.uk

---(end of broadcast)---
TIP 6: Have you searched our list archives? http://archives.postgresql.org
Re: [GENERAL] problem with timeofday() function in cvs PostgreSQL
Pavel Stehule [EMAIL PROTECTED] writes:
>> You probably want now() instead of that function.
>
> I can't use now() or others because I need actual time in transaction.

Just cast the result of timeofday() to timestamp (with time zone, likely).

regards, tom lane

---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]
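Tom's suggestion, spelled out (the contrast with now() assumes a multi-statement transaction):

```sql
-- now() is frozen at transaction start; timeofday() reads the wall clock
-- on every call but returns text, hence the cast:
SELECT timeofday()::timestamp with time zone;
```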
[GENERAL] Hardware Performance Tuning
The doc "PostgreSQL Hardware Performance Tuning" by Bruce Momjian, 16th January 2003, says: "As a start for tuning, use 25% of RAM for cache size, and 2-4% for sort size", and "The default POSTGRESQL configuration allocates 64 shared buffers. Each buffer is 8 kilobytes." Have I understood correctly that shared_buffers should be set to 25% of RAM divided by 8 kB, and sort_mem to 2-4% of RAM? (If so, the default values of 64 and 512 respectively appear relatively small.) Thanks, KP

---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]
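That is the usual reading of the advice. Worked through for a hypothetical machine with 1 GB of RAM (the percentages are the document's starting points, not hard rules; note that shared_buffers counts 8 kB buffers while sort_mem is specified directly in kB):

```python
ram_kb = 1 * 1024 * 1024        # hypothetical machine: 1 GB of RAM, in kB
buffer_kb = 8                   # each shared buffer is 8 kB

# 25% of RAM for the buffer cache, expressed as a count of 8 kB buffers:
shared_buffers = ram_kb * 25 // 100 // buffer_kb

# 2% of RAM for sort memory (sort_mem is a plain kB figure):
sort_mem = ram_kb * 2 // 100

print(shared_buffers)   # 32768 buffers, i.e. 256 MB
print(sort_mem)         # 20971 kB, roughly 20 MB
```

Next to these figures, the defaults of 64 and 512 are indeed tiny, which matches the poster's suspicion.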
Re: [GENERAL] multiple insert into's (may be NEWBIE question)
On Tue, 5 Aug 2003, Williams, Travis L, NPONS wrote: I have a table (lets say a,text b,text) and I want to insert the data jim,jimmy and trav,travis can I do this with 1 insert into statement instead of 2? Not with the current implementation of insert. There's been some discussion of adding the multiple tuple insert seen in other databases, but I don't think anyone's actually done it or even agreed on exactly how to do it. I'm not sure if SQL 3 covers this, it seems like it hints at it, but I can't read that stuff all that well most of the time. I don't think there's a way right now though, without using some form of copy. ---(end of broadcast)--- TIP 7: don't forget to increase your free space map settings
Re: [GENERAL] pg_dump corrupts database?
seems like that shouldn't happen. Stephen Robert Norris wrote: I've encountered this a few times with 7.2 and 7.3. If I do pg_dump of some large ( 100Mb - the bigger the more likely) database, and it gets interrupted for some reason (e.g. the target disk fills up), the source database become corrupt. I start getting errors like: open of /var/lib/pgsql/data/pg_clog/0323 failed: No such file or directory and I have to drop/restore the table in question. Is this a known problem? Is there some safe way to dump databases that avoids it? Stephen ---(end of broadcast)--- TIP 9: the planner will ignore your desire to choose an index scan if your joining column's datatypes do not match
Re: [GENERAL] Changing DB ownership
Added to TODO, to match ALTER TABLE ... OWNER:

o Add ALTER DATABASE ... OWNER TO newowner

--- Sander Steffann wrote:
> Hi,
>> Why would you want to do that? Why not do it an easier way and dump the
>> database and restore it into your new database? There's got to be a lot of
>> stuff to consider when doing something as radical as renaming a database.
>
> He is not talking about renaming his database, he is talking about changing
> the OWNER of the database.
>
>> UPDATE pg_database SET datdba = 504 WHERE datname='chris';
>
> This is how I change the owner of the database too. It's not that difficult,
> but it would be nice if it could be changed using an ALTER statement. I have
> noticed in the past that the dumps produced by pg_dump are difficult to
> restore if the datdba you change to has no rights to create databases. I
> haven't tested this with recent releases though.

I suspect that this has already been fixed in pg_dump.

> Bye, Sander.

--
Bruce Momjian | http://candle.pha.pa.us
[EMAIL PROTECTED] | (610) 359-1001
+ If your life is a hard drive, | 13 Roberts Road
+ Christ can be your backup. | Newtown Square, Pennsylvania 19073

---(end of broadcast)---
TIP 6: Have you searched our list archives? http://archives.postgresql.org
Re: [GENERAL] indices and cidr/inet type
On Tue, 05 Aug 2003 15:09:00 -0400 Tom Lane [EMAIL PROTECTED] wrote: Richard Welty [EMAIL PROTECTED] writes: what type of index is recommended when the dominant lookup on a field is 'networkaddress' <<= ipblock I don't think we have a suitable index type at the moment; or more accurately, we don't have a suitable operator class. It would be an interesting exercise to write an inet opclass for either rtree or GiST. I'm not sure which one would be more suitable, but probably you could handle it with one or the other. hmmm. where should i go looking for sample code implementing operator classes as an example? by way of context, i do a _lot_ of email work, and my client of the moment is the owner of an ISP who is migrating his user system configurations into postgresql as much as he possibly can. this includes his list of allowed mail relay hosts and locally blacklisted spam sources. while the tables aren't that large right now and the planner probably wouldn't use an index even if one existed, i can definitely see fast operations on CIDR and INET types as being more than a little valuable in cases where things do get large. richard -- Richard Welty [EMAIL PROTECTED] Averill Park Networking 518-573-7592 Java, PHP, PostgreSQL, Unix, Linux, IP Network Engineering, Security ---(end of broadcast)--- TIP 3: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [EMAIL PROTECTED] so that your message can get through to the mailing list cleanly
[GENERAL] connect to remote db
hi, I want to connect to database repl_isar located on host isar from host willy. So I wrote: String url = "jdbc:postgresql://isar/repl_isar"; conn = DriverManager.getConnection(url, username, password); But it fails with the error below, both in the postmaster running repl_isar and at the local site willy. java.sql.SQLException: User authentication failed If I try the local database url=repl_willy, the same username and password work well. I did run the createuser script and even ran SQL create user ... What did I miss? Thanks, Yi Lin ---(end of broadcast)--- TIP 6: Have you searched our list archives? http://archives.postgresql.org
[GENERAL] remove $ from money field
Should be a simple question. When selecting a field that is of type money, how can I remove the $ when selected? example: $10.00 would return as 10.00 -Robby ---(end of broadcast)--- TIP 4: Don't 'kill -9' the postmaster
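One possible sketch, assuming your version lets you cast the money column through text (the table and column names are made up; the leading '$' and any thousands separators are stripped before casting to numeric):

```sql
-- Hypothetical table "prices" with a money column "amount"
-- SELECT amount FROM prices;   -- yields e.g. $10.00
SELECT replace(replace(amount::text, '$', ''), ',', '')::numeric AS amount
FROM prices;
```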
Re: [GENERAL] indices and cidr/inet type
Richard Welty [EMAIL PROTECTED] writes: what type of index is recommended when the dominant lookup on a field is 'networkaddress' <<= ipblock I don't think we have a suitable index type at the moment; or more accurately, we don't have a suitable operator class. It would be an interesting exercise to write an inet opclass for either rtree or GiST. I'm not sure which one would be more suitable, but probably you could handle it with one or the other. regards, tom lane ---(end of broadcast)--- TIP 9: the planner will ignore your desire to choose an index scan if your joining column's datatypes do not match
Re: [GENERAL] Email DML Errors and Constraint Violations
On Sun, 2003-08-10 at 19:43, Raymond wrote: Utilizing Postgres 7.3.3 and RH80. Any way to email DML errors and constraint violations from Postgres? Need something event driven; polling logs is not a good solution. Would be very effective to monitor 3rd party client incremental development and testing. Sounds like a job for the postmaster. Maybe messages going to the log file could be tee'd off somehow and piped to a mail script? -- +---+ | Ron Johnson, Jr.Home: [EMAIL PROTECTED] | | Jefferson, LA USA| | | | Man, I'm pretty. Hoo Hah! | |Johnny Bravo | +---+ ---(end of broadcast)--- TIP 5: Have you checked our extensive FAQ? http://www.postgresql.org/docs/faqs/FAQ.html
[GENERAL] Auditing idea
Hey all! I had an idea to implement a very simple (IMHO) auditing mechanism for an application I'm developing, and thought some people here could give some hints, opinions, etc about it. I was thinking about implementing update and delete triggers for all my tables that I want to be audited, and these triggers would automatically create the auditing message for me. Basically, the update triggers would store the row values in the OLD and NEW rows, and the delete trigger would store the values in the OLD row. What do you guys think of this idea? To me it seems simple and good enough. /~\ The ASCII Felipe Schnack ([EMAIL PROTECTED]) \ / Ribbon Campaign Analista de Sistemas X Against HTML Cel.: 51-91287530 / \ Email! Linux Counter #281893 Centro Universitário Ritter dos Reis http://www.ritterdosreis.br [EMAIL PROTECTED] Fone: 51-32303341 ---(end of broadcast)--- TIP 6: Have you searched our list archives? http://archives.postgresql.org
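A sketch of the kind of trigger being described, assuming plpgsql is installed in the database. All table, column and function names are hypothetical, and since plpgsql of this era cannot stringify a whole row, individual columns are logged:

```sql
CREATE TABLE audit_log (
    tablename text,
    operation text,
    old_id    integer,
    new_id    integer,
    logged_at timestamp DEFAULT now()
);

CREATE FUNCTION audit_trig() RETURNS trigger AS '
BEGIN
    IF TG_OP = ''DELETE'' THEN
        INSERT INTO audit_log (tablename, operation, old_id)
            VALUES (TG_RELNAME, TG_OP, OLD.id);
        RETURN OLD;
    ELSE  -- UPDATE: record both the old and new key
        INSERT INTO audit_log (tablename, operation, old_id, new_id)
            VALUES (TG_RELNAME, TG_OP, OLD.id, NEW.id);
        RETURN NEW;
    END IF;
END;
' LANGUAGE 'plpgsql';

CREATE TRIGGER mytable_audit AFTER UPDATE OR DELETE ON mytable
    FOR EACH ROW EXECUTE PROCEDURE audit_trig();
```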
Re: [GENERAL] Changing DB ownership
On Tue, 2003-08-05 at 11:11, Jimmie H. Apsey wrote: Why would you want to do that? Why would I want to change ownership of a database? I could think of numerous reasons. Why not do it an easier way and dump the database and restore it into your new database? Because that would mean that the database would be down while I did that. The users to whom I gave ownership to the DB already had permissions on the tables in the DB. This meant no down time. There's got to be a lot of stuff to consider when doing something as radical as renaming a database. I didn't rename any database, nor do anything 'radical'. Please re-read my message and you'll see. This SQL query: UPDATE pg_database SET datdba = 504 WHERE datname='chris'; is obviously simpler than a pg_dump, dropdb, createdb, pg_restore. Cheers, Chris -- Christopher Murtagh Webmaster / Sysadmin Web Communications Group McGill University Montreal, Quebec Canada Tel.: (514) 398-3122 Fax: (514) 398-2017 ---(end of broadcast)--- TIP 3: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [EMAIL PROTECTED] so that your message can get through to the mailing list cleanly
Re: [GENERAL] Auditing idea
You might want to take a look at this project: http://gborg.postgresql.org/project/tablelog/download/download.php There is another project similar to that one on gborg as well, but I don't recall the name at this time. On Thu, 2003-08-07 at 15:30, Felipe Schnack wrote: Hey all! I had an idea to implement a very simple (IMHO) auditing mechanism for an application I'm developing, and thought some people here could give some hints, opinions, etc about it. I was thinking about implementing update and delete triggers for all my tables that I want to be audited, and these triggers would create automatically the auditing message for me. Basically, the update triggers would store the row values in the OLD and NEW rows, and the delete trigger would store the values in the OLD row. What you guys think of this idea? To me seems simple and good enough. -- Best Regards, Mike Benoit ---(end of broadcast)--- TIP 4: Don't 'kill -9' the postmaster
[GENERAL] High volume use of postGres
I've been charged with investigating postGIS as a replacement for our current Informix runtime (9.3), which uses the Spatial blade. Currently we use two Sun Sparc/Ultra 80 boxes with 3 CPUS dedicated to Informix. We use our database to calculate the image files we have within a given area (ST_Union, Intersect and the like). Originally this company tried Oracle, but, as the Firesign Theater put it, He's no fun, he fell right over! ... Informix works well but charges a hefty amount for each runtime CPU. Our databases are not particularly large -- only about 500,000 rows in the larger ones, but we're running more than a million hits a day, not evenly spread, natch so at peak we need a lot of capacity. Are there documents on performance of postGres on various platforms that I might be able to get some ideas from ? Such issues as multiple CPU vs single CPU, Operating System -- we're leaning towards Linux of some flavor but also have some Sun servers (not as hefty as our current runtime database servers) -- would be of interest. Any suggestions from users that have/are really beating up postGIS/postgres would be welcome ! (for instance, cleaning up a busy database, supporting multiple servers, etc.) Thanks, Greg Williamson DBA GlobeXplorer LLC ---(end of broadcast)--- TIP 5: Have you checked our extensive FAQ? http://www.postgresql.org/docs/faqs/FAQ.html
Re: [GENERAL] How to prevent vacuum and reindex from deadlocking.
On Sun, 10 Aug 2003 14:50:10 -0600 Robert Creager [EMAIL PROTECTED] said something like: I'm running 7.4Beta1 with pg_autovacuum. In one of my operations which is executed frequently, a REINDEX is done after a COPY. Well, VACUUMs are being executed by pg_autovacuum, and my app is losing the deadlock. Is there a way I can prevent this from happening? I can obviously acquire a LOCK before doing the REINDEX (which one?), but it looks like this should be happening anyway by both the VACUUM and REINDEX statements. Oops, if it helps, the log of the deadlock: Aug 10 14:19:36 thunder postgres[18735]: [2-1] ERROR: deadlock detected Aug 10 14:19:36 thunder postgres[18735]: [2-2] DETAIL: Proc 18735 waits for AccessExclusiveLock on relation 18028 of database 17140; blocked by proc 18815. Aug 10 14:19:36 thunder postgres[18735]: [2-3] Proc 18815 waits for AccessExclusiveLock on relation 18101 of database 17140; blocked by proc 18735. -- 14:55:41 up 9 days, 7:41, 4 users, load average: 1.56, 1.27, 1.08
Re: [GENERAL] plPHP -- sort of an announcement.. but not commercial
On Fri, 1 Aug 2003, Joshua D. Drake wrote: Hello, As a recent flurry of activity has commenced within Command Prompt we have released upon this rather unround earth, plPHP. Yes it is trigger safe, yes you can write UDF's in PostgreSQL with PHP now. Find it here: http://www.commandprompt.com/entry.lxp?lxpe=260 I'm looking for how to reference old/new rows in a trigger, what's the name of the data structure to use there? Thanks for doing this, by the way to all the folks involved. ---(end of broadcast)--- TIP 5: Have you checked our extensive FAQ? http://www.postgresql.org/docs/faqs/FAQ.html
Re: [GENERAL] Knowing how many records I just inserted
How can I get that back out to bash, if I'm doing psql scripting? An env. variable wouldn't work, since it would go away when psql terminates. Capture the stdout of the psql command and pipe it through cut in order to get only the third word of output. your_shell_variable=`psql -c 'insert into t1 select attrs from t2' | cut -d' ' -f3` hth, cl. TIA -- +-+ | Ron Johnson, Jr.Home: [EMAIL PROTECTED] | | Jefferson, LA USA | | | | I'm not a vegetarian because I love animals, I'm a vegetarian | | because I hate vegetables!| |unknown | +-+ ---(end of broadcast)--- TIP 2: you can get off all lists at once with the unregister command (send unregister YourEmailAddressHere to [EMAIL PROTECTED]) ---(end of broadcast)--- TIP 9: the planner will ignore your desire to choose an index scan if your joining column's datatypes do not match
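For the archives, a self-contained illustration of that pipeline. Since psql of this era prints `INSERT <oid> <rowcount>` for an insert, the third space-separated word is the count; here an echo stands in for the live psql call (the output line shown is assumed, not captured from a real server):

```shell
# Stand-in for: psql -c 'insert into t1 select attrs from t2'
psql_output="INSERT 0 3"

# Grab the third space-separated field: the row count
rowcount=$(echo "$psql_output" | cut -d' ' -f3)
echo "$rowcount"
```

Running this prints 3, which is what `your_shell_variable` would hold in the real pipeline.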
Re: [GENERAL] tsearch2 on postgresql 7.3.4
Did you make install? [EMAIL PROTECTED] wrote: I am trying to set up tsearch2 on postgresql 7.3.4 on a Redhat 9 system, installed from rpms. There seemed to be some files required for installation of tsearch missing so I downloaded the src bundle too. Tsearch2 then compiled ok but now the command: psql mydb < tsearch2.sql fails with a message along the lines of: unable to stat $libdir/tsearch2 no such file I read up on valena.com about what $libdir was, but there's no mention of how to find out what the value of $libdir is. how do i find out where $libdir is for the current install? what files do i need to move into the $libdir directory to get tsearch2 up and running? Thanks! -- Teodor Sigaev E-mail: [EMAIL PROTECTED] ---(end of broadcast)--- TIP 6: Have you searched our list archives? http://archives.postgresql.org
Re: [GENERAL] unexpected chunk number
Eric Anderson Vianet SAO [EMAIL PROTECTED] writes: pg_dump: dumping out the contents of table tbdmovimento pg_dump: ERROR: unexpected chunk number 8 (expected 0) for toast value 6935693 Hm. Could we see the results of select chunk_seq,length(chunk_data) from tbdmovimento's toast table where chunk_id = 6935693; If you are not trusting the index then it might be good to try this with enable_indexscan turned off and see if you get the same answers. regards, tom lane ---(end of broadcast)--- TIP 2: you can get off all lists at once with the unregister command (send unregister YourEmailAddressHere to [EMAIL PROTECTED])
Re: [GENERAL] Trying to create a GiST index in 7.3
Dmitry Tkach [EMAIL PROTECTED] writes: I am trying to create a custom GiST index in 7.3, but getting an error, ... I have done all the setup that was required in 7.2.4: You should not be using the 7.2 methods anymore --- there is a CREATE OPERATOR CLASS, use that instead. (See the contrib gist classes for examples.) testdb=# select * from pg_opclass where opcname = 'gist_index_ops';
-[ RECORD 1 ]+---------------
opcamid      | 783
opcname      | gist_index_ops
opcnamespace |
opcowner     |
opcintype    | 20
opcdefault   | t
opckeytype   | 0
Those NULL fields probably explain your problems ... (the fields are marked NOT NULL, but due to an oversight, the constraint is not enforced against core system catalogs in 7.3 :-() regards, tom lane ---(end of broadcast)--- TIP 3: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [EMAIL PROTECTED] so that your message can get through to the mailing list cleanly
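For anyone landing here from a search, the 7.3 replacement looks roughly like the skeleton below. Everything in it (the opclass name, the operators chosen, the support function names) is a placeholder, not a working inet opclass; the contrib GiST modules show the full set of required support functions:

```sql
-- Skeleton only: names, operators and functions are placeholders
CREATE OPERATOR CLASS gist_inet_ops
    DEFAULT FOR TYPE inet USING gist AS
        OPERATOR 1 << ,
        OPERATOR 2 <<= ,
        FUNCTION 1 my_inet_consistent(internal, inet),
        FUNCTION 2 my_inet_union(bytea, internal);
        -- ...a real GiST opclass also needs compress, decompress,
        -- penalty, picksplit and same support functions
```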
Re: [GENERAL] Update of foreign key values
On Tue, 12 Aug 2003, Roderick A. Anderson wrote: On Mon, 11 Aug 2003, Stephan Szabo wrote: Yes. Either DEFERRABLE or INITIALLY DEFERRED must be given in order for set constraints to be meaningful. This might be another good place to consider a little clarification (or maybe a doc note in the interactive docs) Phew. I thought I was going bonkers. Then add that I never named the constraints so I was still out'a luck. Well, the constraint still got an autogenerated name, so you probably could have used that, although you might end up affecting more constraints than you had meant. ---(end of broadcast)--- TIP 4: Don't 'kill -9' the postmaster
Re: [GENERAL] Sorting Problem
It does if you look at the original email. Maksim must've just transposed a couple letters when he was writing his demo. Jeff Kathy zhu wrote: If it skips -, then RMT-* should come before RM-V*, but they don't, why ?? Maksim Likharev wrote: en_US locale skips punctuation from the sorting index, so in your case RM-791 RM-AV2100 RM-PP401 RM-PP402 RM-PP404 RM-V10 RM-V11 RM-V12 RMT-D10 RMT-D108A RMT-D109A RMT-D116A RMT-V402 == RM791 RMAV2100 RMPP401 RMPP402 RMPP404 RMV10 RMV11 RMV12 RMTD10 RMTD108A RMTD109A RMTD116A RMTV402 -Original Message- From: Kathy zhu [mailto:[EMAIL PROTECTED] Sent: Tuesday, August 12, 2003 3:30 PM To: Tom Lane Cc: Tim Edwards; [EMAIL PROTECTED] Subject: Re: [GENERAL] Sorting Problem Do you mean that sorting doesn't work for en_US locale ??? And, does encoding affect sorting at all ?? thanks, kathy Tom Lane wrote: Tim Edwards [EMAIL PROTECTED] writes: When I sort ASC on the varchar I get some strange results. Here is a section of data cut after running a sort. It starts with RM- then does RMT- Then goes back for more RM-. Sounds like you're in en_US locale, or at least something other than C locale. Unfortunately this can only be fixed by re-initdb'ing :-( regards, tom lane ---(end of broadcast)--- TIP 2: you can get off all lists at once with the unregister command (send unregister YourEmailAddressHere to [EMAIL PROTECTED]) ---(end of broadcast)--- TIP 2: you can get off all lists at once with the unregister command (send unregister YourEmailAddressHere to [EMAIL PROTECTED])
Re: [GENERAL] PostGreSQL - Accessing It
On Wed, Aug 13, 2003 at 01:34:00AM +0000, Mel Roman wrote: I have the same problem as Michael. I've installed postgresql and am running the service, but I can't log in as postgres. I just tried the command line psql -U postgres template1, but I get the message authentication failed for user 'postgres'. The administrator's manual confirms that I first need to connect as this predefined user, but doesn't say what that user's password is. Did you try su - postgres as root? This should bring you into a shell of the postgres user. HTH, -tb -- Thomas Beutin [EMAIL PROTECTED] Beam me up, Scotty. There is no intelligent live down in Redmond. ---(end of broadcast)--- TIP 7: don't forget to increase your free space map settings
[GENERAL] public key functions for postgresql ?
I need to store some sensitive data and I want to use public keys so anyone can encrypt the data but it can only be decrypted by certain users. Anyhow, are there any loadable modules that do public key encryption for Postgresql? I'd like to access these functions in plpgsql. It's probably not that hard, just don't want to re-invent the wheel. I could probably do it from plperl since there are plenty of perl PK tools. Recommendations? G ---(end of broadcast)--- TIP 4: Don't 'kill -9' the postmaster
Re: [GENERAL] OT: Address Fields
On Thu, 31 Jul 2003 11:19:25 +0100, David W Noon wrote: On Thursday 31 Jul 2003 10:39 in [EMAIL PROTECTED], 2trax ([EMAIL PROTECTED]) wrote: It seems to me that the most flexible way is to use a text field to hold everything, apart from the country which suits a varchar? and perhaps have another dedicated varchar field to hold USA ZIP codes / UK post codes for easy searching? There are ISO standard codes for countries: US, CA, GB, AU, etc. These each fit into a fixed-width CHAR(2) field. I would use a VARCHAR of some large size and map it into structured fields using views. Each view is built with WHERE country_code = 'xx' for the matching 'xx' in the ISO standard. This will allow you to search in a structured manner, provided you have the country code. David, Thanks for a good suggestion. I'll take a crack at it. Cheers, Sam. --- Posted via news://freenews.netfront.net Complaints to [EMAIL PROTECTED] ---(end of broadcast)--- TIP 8: explain analyze is your friend
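A small sketch of the layout David suggests, with hypothetical names:

```sql
-- Free-form address text plus an ISO 3166 country code
CREATE TABLE addresses (
    addr_text    text,
    postal_code  varchar(10),
    country_code char(2)
);

-- One structured view per country, selected on the ISO code
CREATE VIEW us_addresses AS
    SELECT addr_text, postal_code AS zip
    FROM addresses
    WHERE country_code = 'US';
```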
Re: [GENERAL] multiple insert into's (may be NEWBIE question)
Didn't know if it would be any faster.. Travis -Original Message- From: Ron Johnson [mailto:[EMAIL PROTECTED] Sent: Tuesday, August 05, 2003 3:00 PM To: PgSQL General ML Subject: Re: [GENERAL] multiple insert into's (may be NEWBIE question) On Tue, 2003-08-05 at 14:42, Stephan Szabo wrote: On Tue, 5 Aug 2003, scott.marlowe wrote: On Tue, 5 Aug 2003, Williams, Travis L, NPONS wrote: I have a table (lets say a,text b,text) and I want to insert the data jim,jimmy and trav,travis can I do this with 1 insert into statement instead of 2? Not with the current implementation of insert. There's been some discussion of adding the multiple tuple insert seen in other databases, but I don't think anyone's actually done it or even agreed on exactly how to do it. I'm not sure if SQL 3 covers this, it seems like it hints at it, but I can't read that stuff all that well most of the time. I don't think there's a way right now though, without using some form of copy. Well, you can do it with insert ... select and union. insert into tab select 'jim', 'jimmy' union select 'trav', 'travis'; But the bottom line question is why do it in the 1st place? Multiple INSERT commands work like a peach, as does COPY from stdin (thanks again, Jason). -- +-+ | Ron Johnson, Jr.Home: [EMAIL PROTECTED] | | Jefferson, LA USA | | | | I'm not a vegetarian because I love animals, I'm a vegetarian | | because I hate vegetables!| |unknown | +-+ ---(end of broadcast)--- TIP 2: you can get off all lists at once with the unregister command (send unregister YourEmailAddressHere to [EMAIL PROTECTED]) ---(end of broadcast)--- TIP 5: Have you checked our extensive FAQ? http://www.postgresql.org/docs/faqs/FAQ.html
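Written out, the INSERT ... SELECT ... UNION workaround from the thread looks like this (UNION ALL is used here so duplicate rows are not silently collapsed; that detail is my addition to the example above):

```sql
CREATE TABLE tab (a text, b text);

-- Two rows in one statement
INSERT INTO tab
    SELECT 'jim', 'jimmy'
    UNION ALL
    SELECT 'trav', 'travis';
```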
Re: [GENERAL] Sorting Problem
Danke, Spacibo, gracias, thanks. Tom Lane wrote: Dennis Gearon [EMAIL PROTECTED] writes: You mean in his own local environment? So all his programs, console operations, etc, will have the new encoding? Or 'LANG/LC_ALL' for Posgres specifically? I mean he needs to run initdb with C as the selected locale. It has nothing to do with what environment his other programs run in. regards, tom lane ---(end of broadcast)--- TIP 7: don't forget to increase your free space map settings
[GENERAL] Column number 4 is out of range 0..3 ?
I just got this message for all the records returned by select * from table, what does this mean? (I don't have any restriction on the values in column 4...) Regards, BTJ --- Bjørn T Johansen (BSc,MNIF) Executive Manager [EMAIL PROTECTED] Havleik Consulting Phone : +47 67 54 15 17 Conradisvei 4 Fax : +47 67 54 13 91 N-1338 Sandvika Cellular : +47 926 93 298 http://www.havleik.no --- The stickers on the side of the box said Supported Platforms: Windows 98, Windows NT 4.0, Windows 2000 or better, so clearly Linux was a supported platform. --- ---(end of broadcast)--- TIP 4: Don't 'kill -9' the postmaster
Re: [GENERAL] How to prevent vacuum and reindex from deadlocking.
Postgres itself doesn't support nested transactions. Robert Creager wrote: On Mon, 11 Aug 2003 11:05:57 -0400 Tom Lane [EMAIL PROTECTED] said something like: If you really want to rebuild only the one index, I think this will work: begin; lock table tab; reindex index ndx; commit; Figures. It appears that DBD::Pg doesn't support nested transactions (I was already in a transaction). DBD::Pg::db begin_work failed: Already in a transaction at /tass/bin/importSList.pl line 445. Unfortunately, it does take longer overall because of the second index on the table, but it always works ;-) 'Nother duh moment, delete the second index, and then it works just fine speed wise. Guess I need to re-examine my index usage... Thanks, Rob ---(end of broadcast)--- TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]
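For reference, the sequence Tom suggested upthread, as a single transaction (tab and ndx are the placeholder names from his reply):

```sql
BEGIN;
LOCK TABLE tab;      -- takes the exclusive lock up front, so a
                     -- concurrent VACUUM can no longer deadlock us
REINDEX INDEX ndx;
COMMIT;
```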
Re: [GENERAL] One table into two different databases
On Monday 11 August 2003 09:32, Marcelo Soares wrote: Hi all, I would like to know if it's possible to have the SAME TABLE in two different databases (but on the same server), or to create a view of a table of database X in database Y. I tried to find it in the Postgres docs, with no success. Try running in one database with multiple schemas. -- Jonathan Gardner [EMAIL PROTECTED] Live Free, Use Linux! ---(end of broadcast)--- TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]
[GENERAL] Could not receive data from server
Hi, Running Postgres 7.2.1 with python/psycopg (python interface for postgres using libpq)/webware application server on Solaris, I get a postgres error could not receive data from server Error 0 when I try to retrieve a large text column (16 K) from a table. The query I use is select col from table. However, if I avoid the webware application server, this seems to work ok. I am just wondering what effect webware could have on the postgres client libraries. The postgres server is on a different machine. It doesn't seem to be a memory related issue as webware uses only 5-6 MB on a machine having 256 MB. Thanks.. Amin ---(end of broadcast)--- TIP 6: Have you searched our list archives? http://archives.postgresql.org
Re: [GENERAL] like performance w/o wildcards.
On Mon, 4 Aug 2003, Joseph Shraibman wrote: Tom Lane wrote: Joseph Shraibman [EMAIL PROTECTED] writes: What percentage of locales have this problem? Does latin1 have this problem? Latin1 is an encoding, not a locale. To a first approximation, I'd say *all* non-C locales have some kind of sorting funny business. OK this clears things up a bit. The locale on my production server (redhat) is set to en_US, which explains why LIKE doesn't use an index. Do I just have to reset the locale environment variable and restart postgres? What might the side effects of that be? Nope, changing locales involves dumping, re-initdb'ing, and restoring. Sorry. :( And what about my original idea, can LIKE be turned into an = when there are no wildcards? It does ... if the index-conversion optimization is enabled at all. Sorry, what is 'index-conversion optimization' and when is it enabled? I don't know what that is either. Tom? ---(end of broadcast)--- TIP 9: the planner will ignore your desire to choose an index scan if your joining column's datatypes do not match
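The dump/re-init/restore cycle, sketched with illustrative paths (back up first; exact initdb locale flags vary by version, so the environment-variable form is shown):

```shell
pg_dumpall > /tmp/all.sql              # with the old cluster still running
# ...stop the postmaster, move the old data directory aside...
LANG=C initdb -D /usr/local/pgsql/data
# ...start the postmaster against the new directory, then:
psql -f /tmp/all.sql template1
```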
Re: [GENERAL] types of constraint deferment
On 11 Aug 2003, Ron Johnson wrote: v7.3.3 http://www.postgresql.org/docs/7.3/static/sql-createtable.html Is INITIALLY DEFERRED a modifier of NOT DEFERRABLE? If so, what does it do, since they seem contradictory. It's not allowed. An INITIALLY DEFERRED constraint must not be defined as NOT DEFERRABLE, although an INITIALLY DEFERRED constraint that doesn't specify either is considered DEFERRABLE. The docs refered to are weak on this, care to try a rewrite of those parts? :) ---(end of broadcast)--- TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]
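A sketch of how the keywords combine in practice, using a hypothetical foreign key:

```sql
-- DEFERRABLE, but by default still checked at the end of each statement
ALTER TABLE orders ADD CONSTRAINT orders_cust_fk
    FOREIGN KEY (cust_id) REFERENCES customers (id)
    DEFERRABLE INITIALLY IMMEDIATE;

BEGIN;
-- Legal only because the constraint was declared DEFERRABLE
SET CONSTRAINTS orders_cust_fk DEFERRED;
-- ...statements that are temporarily inconsistent...
COMMIT;   -- the constraint is finally checked here
```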
Re: [GENERAL] PL/C functions
Dennis Gearon [EMAIL PROTECTED] writes: Uuuh, C++ can do it, PHP can do it, JAVA can do it, I just want to know if PL/C can do it for UDF's. We have no animal called PL/C. Kindly be more clear about what your question is. regards, tom lane ---(end of broadcast)--- TIP 8: explain analyze is your friend
Re: [GENERAL] Sorting Problem
If you are talking about everything that lies under so-called LATIN-1 ( ISO-8859-1 ), en_US encapsulates ( at least supposed to ) all those sorting rules, do not remember about accents tho. -Original Message- From: Kathy zhu [mailto:[EMAIL PROTECTED] Sent: Tuesday, August 12, 2003 4:43 PM To: [EMAIL PROTECTED] Cc: Tom Lane; Tim Edwards; [EMAIL PROTECTED] Subject: Re: [GENERAL] Sorting Problem This brings up another question: Say initdb with en_US locale, and we have localized strings for different languages stored in the db. If we have a client in Germany, and want to see the text sorted in German. I mean that we want to do db sorting for German strings and display the result in the browser. How are we going to handle that ?? thanks, kathy Dennis Gearon wrote: Danke, Spacibo, gracias, thanks. Tom Lane wrote: Dennis Gearon [EMAIL PROTECTED] writes: You mean in his own local environment? So all his programs, console operations, etc, will have the new encoding? Or 'LANG/LC_ALL' for Postgres specifically? I mean he needs to run initdb with C as the selected locale. It has nothing to do with what environment his other programs run in. regards, tom lane ---(end of broadcast)--- TIP 7: don't forget to increase your free space map settings -- ---(end of broadcast)--- TIP 6: Have you searched our list archives? http://archives.postgresql.org ---(end of broadcast)--- TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]
Re: [GENERAL] Update of foreign key values
Yuup, always name constraints so it's easier to remove them. And if you name them meaningfully, then others might understand why they exist! (or later after a coffeeless morning) Roderick A. Anderson wrote: On Mon, 11 Aug 2003, Stephan Szabo wrote: Yes. Either DEFERRABLE or INITIALLY DEFERRED must be given in order for set constraints to be meaningful. This might be another good place to consider a little clarification (or maybe a doc note in the interactive docs) Phew. I thought I was going bonkers. Then add that I never named the constraints so I was still out'a luck. Rod ---(end of broadcast)--- TIP 2: you can get off all lists at once with the unregister command (send unregister YourEmailAddressHere to [EMAIL PROTECTED])
Re: [GENERAL] Error message: Ralation X does not have
At 01:40 PM 8/6/2003, Peter Eisentraut wrote: You need to double-quote the names or they will be converted to lower case. Thank you both. The reason why the initial table creation has quoted column names is that it's generated by a program we use for schema creation. I'm currently adding to it so it can support Postgres too, and the way columns are set up was more or less copied from the MS Sql implementation. I'm really not used to databases being so case sensitive; neither Oracle nor MS Sql Server is, but I'll keep this thing in mind moving along. Quite impressed with the response time, or maybe it was just a too easy question :-) Best regards Jan Oksfeldt Jonasen Northmann A/S web: http://www.northmann.com newsgroup: news:news.northmann.com ---(end of broadcast)--- TIP 3: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [EMAIL PROTECTED] so that your message can get through to the mailing list cleanly
Re: [GENERAL] postgres+daemontools
I didn't see daemontools mentioned anywhere in these. Murthy Kambhampaty wrote: It's multiple postmasters, each with its own $PGDATA folder and TCP port. See: http://marc.theaimsgroup.com/?l=postgresql-admin&w=2&r=1&s=Run+4+postgresql+session+on+ONE+server%3F&q=b -Original Message- From: Dennis Gearon [mailto:[EMAIL PROTECTED] Sent: Wednesday, August 06, 2003 11:59 To: Murthy Kambhampaty Cc: 'Kolus Maximiliano'; [EMAIL PROTECTED] Subject: Re: [GENERAL] postgres+daemontools Is this multiple postmasters against the SAME database? Or against multiple copies? Murthy Kambhampaty wrote: We recently started using daemontools to manage the herd of postmasters (see the recent thread regarding running multiple postmasters on a single machine), and it's working quite well. We still have to figure out how to mimic the pg_ctl stop -m fast|immediate modes; svc -d /service/<server name> mimics pg_ctl stop -m smart which may be all we need (I guess svc -i /service/<server name>; svc -d /service/<server name> effectively gives the fast mode.) Murthy -Original Message- From: Kolus Maximiliano [mailto:[EMAIL PROTECTED] Sent: Wednesday, August 06, 2003 10:28 To: [EMAIL PROTECTED] Subject: [GENERAL] postgres+daemontools I'm about to install postgres on a box that has daemontools on it and I would like to use it. Is there anybody here already running postgres from daemontools? Is it advisable? Do you have any problems or issues I should take care of before trying it? Thanks in advance. ---(end of broadcast)--- TIP 9: the planner will ignore your desire to choose an index scan if your joining column's datatypes do not match ---(end of broadcast)--- TIP 3: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [EMAIL PROTECTED] so that your message can get through to the mailing list cleanly
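For later readers: a minimal daemontools run script for a single postmaster, assuming daemontools' setuidgid is available and $PGDATA is /var/lib/pgsql/data (both paths are illustrative):

```shell
#!/bin/sh
# /service/postgresql/run -- executed and supervised by daemontools' svscan
exec 2>&1
exec setuidgid postgres postmaster -D /var/lib/pgsql/data
```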
Re: [GENERAL] Sorting Problem
On Tue, 12 Aug 2003, Maksim Likharev wrote: If you are talking about everything that lies under so-called LATIN-1 ( ISO-8859-1 ), en_US encapsulates ( at least supposed to ) all those sorting rules, do not remember about accents tho. It does not work like that. Different countries in Europe that all use latin1 use different sort orders for the same characters. There is no way to have one set of rules work for all. If we take Swedish for example, then v and w are treated as the same letter when sorting. So in a dictionary all words beginning with v or w are mixed together. In the future we need indexes that depend on the locale (and a lot of other changes). -- /Dennis ---(end of broadcast)--- TIP 2: you can get off all lists at once with the unregister command (send unregister YourEmailAddressHere to [EMAIL PROTECTED])
Re: [GENERAL] 7.4Beta1 failed to create socket: Address family not
My original commit had a message stating it was an IPv6 and the kernel didn't support it. I don't see that message in CVS anymore, but I think we need something similar. There was a big discussion over whether we should require IPv6 to be enabled individually, and then throw a hard error if IPv6 fails, but at this stage, it seemed best to most to just try IPv6 and soft-fail, while throwing a message in the server logs. --- Tom Lane wrote: Robert Creager [EMAIL PROTECTED] writes: Aug 10 14:11:27 thunder postgres[18613]: [1-1] LOG: failed to create socket: Address family not supported by protocol It's normal for this to happen if you have userland (libc) code that supports IPv6 but your kernel isn't configured to do so. The postmaster will try to create both IPv4 and IPv6 sockets, because getaddrinfo() told it to, but the IPv6 attempt will fail as above. However, I can see that this is going to become a FAQ if we leave the behavior alone. I am wondering if we can suppress this message without making life difficult for people who are trying to debug actual problems in setting up sockets. We could just ignore EAFNOSUPPORT failures, but I'm not sure if there are any cases where such an error would genuinely be interesting. Another possibility is to issue the per-failure messages at a very low level (DEBUG2 maybe) and only LOG when we can't create any socket at all. Perhaps there are better answers. Any ideas? regards, tom lane -- Bruce Momjian| http://candle.pha.pa.us [EMAIL PROTECTED] | (610) 359-1001 + If your life is a hard drive, | 13 Roberts Road + Christ can be your backup.| Newtown Square, Pennsylvania 19073 ---(end of broadcast)--- TIP 8: explain analyze is your friend
[GENERAL] postgesql-7.3.3 lo_create/lo_open fails
Hi Guys, I'm currently involved in a port of our gateway code from a Solaris 8 environment to HP-UX 11.11 incorporating postgresql-7.3.3. We're having some problems with creation and opening of large objects and was hoping that you may be able to shed some light on them. I extracted the LO code into the test program below and am able to recreate the problems:

int main( int argc, char * argv[] )
{
    char chBytes[] = FILLER;
    int iLength = strlen(chBytes);
    char * chTempDB = NULL;
    chTempDB = getenv (CMN_DBNAME);
    char *chTempUserName = NULL;
    chTempUserName = getenv (CMN_DBUSERNAME);
    PGconn * conn = PQsetdbLogin(NULL, NULL, NULL, NULL, chTempDB, chTempUserName, NULL);
    /*
     * check to see that the backend connection was successfully made
     */
    if (PQstatus(conn) == CONNECTION_BAD)
    {
        cout << "Large object connection is bad" << endl;
    }
    PGresult *res = PQexec(conn, "begin");
    PQclear(res);
    Oid lobjId = lo_creat(conn, INV_READ|INV_WRITE);
    if (lobjId != -1)
    {
        cout << "Large object ID created : " << lobjId << endl;
        int lobj_fd = lo_open(conn, lobjId, INV_WRITE);
        cout << "Large object file descriptor : " << lobj_fd << endl;
        if (lobj_fd != -1)
        {
            // write the large object
            int nbytes_out = lo_write(conn, lobj_fd, chBytes, iLength);
            cout << "Large object written - no of bytes : " << nbytes_out << endl;
            (void) lo_close(conn, lobj_fd);
        }
    }
    res = PQexec(conn, "end");
    PQclear(res);
    PQfinish(conn);
}

All worked fine on the Solaris box, but on the HP the lo_creat returns 0 for the oid - subsequently lo_open returns -1 when an attempt is made to open the object. The configuration used to install postgresql-7.3.3 on the HP environment is as follows: configure CC=/bin/cc AR=/bin/ar CFLAGS=+DA2.0W --without-readline --without-zlib Has anyone come across this problem?
Any help on this would be much appreciated Thanks in advance Colm -- ** * Colm Dobbs Email: [EMAIL PROTECTED] * * Software Engineer Web: www.aepona.com * * Aepona LTD,Interpoint Building, Phone: +44 (0)2890 269186 * * 20-24 York Street, Belfast Fax: +44 (0)2890 269111 * * BT15 1AQ N.Ireland * ** ---(end of broadcast)--- TIP 5: Have you checked our extensive FAQ? http://www.postgresql.org/docs/faqs/FAQ.html
Re: [GENERAL] postmaster(s) have high load average
On Sat, Aug 09, 2003 at 05:45:59PM -0300, Claudio Lapidus wrote: Run VACUUM VERBOSE on it; you'll no doubt see that some internal tables such as pg_activity, pg_statistic, and such have a lot of dead tuples. Establishing a connection leads to _some_ DB activity, and probably a dead tuple or two; every time you ANALYZE, you create a bunch of dead tuples since old statistics are killed off. What? Does this mean that it is needed to routinely vacuum system tables too? If so, which is the recommended procedure? On our system we do a vacuum analyze every night to cleanup what happened during the day. Vacuum without a table name does all tables so you don't explicitly need to list them. -- Martijn van Oosterhout [EMAIL PROTECTED] http://svana.org/kleptog/ All that is needed for the forces of evil to triumph is for enough good men to do nothing. - Edmond Burke The penalty good people pay for not being interested in politics is to be governed by people worse than themselves. - Plato pgp0.pgp Description: PGP signature
[GENERAL] Empty Output? How Do I Determine the Character?
I cannot determine what character is stored in a varchar... For instance: thedonnaholics=# select state from mailing_list where rec_num = 7; state --- (1 row) If I then execute: thedonnaholics=# select count(*) from mailing_list where state = ''; count --- 0 (1 row) So, what is in that first row? This also returns 0: thedonnaholics=# select count(*) from mailing_list where state = ' '; count --- 0 (1 row) Any ideas? I'd like to select all rows that contain whatever that first one contains... Hunter ---(end of broadcast)--- TIP 4: Don't 'kill -9' the postmaster
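One way to answer the question from the client side is to dump the code points of the value; invisible characters such as tabs or non-breaking spaces print as blank but match neither '' nor ' '. The reveal helper below is hypothetical (inside psql, length(state) and ascii(state) would give similar information):

```python
# Client-side sketch: name each character in a value so invisible
# ones become visible.
def reveal(value: str):
    return ["U+%04X %s" % (ord(c), repr(c)) for c in value]

# e.g. a value holding a tab prints as blank but matches neither
# the empty string nor a single space, just as in the question:
mystery = "\t"
print(reveal(mystery))
print(mystery == "", mystery == " ")
```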
Re: [GENERAL] Tsearch2 custom dictionaries
On Thu, 7 Aug 2003 [EMAIL PROTECTED] wrote: Part1. I have created a dictionary called 'webwords' which checks all words and curtails them to 300 chars (for now) after running make; make install I then copied the lib_webwords.so into my $libdir I have run psql mybd < dict_webwords.sql Once you did 'psql mybd < dict_webwords.sql' you should be able to use it :) Test it : select lexize('webwords','some_web_word'); I did test it with select lexize('webwords','some_web_word'); lexize --- {some_web_word} select lexize('webwords','some_400char_web_word'); lexize {some_shortened_web_word} so that bit works, but then I tried SELECT to_tsvector( 'webwords', 'my words' ); Error: No tsearch config Did you read http://www.sai.msu.su/~megera/oddmuse/index.cgi/Gendict yeah, i did read it - it's good! should i run: update pg_ts_cfgmap set dict_name='{webwords}'; Part2. snip As the text can be multilingual I don't think stemming is possible? You're right. I'm afraid you need a UTF database, but tsearch2 isn't UTF-8 compatible :( My database was created as unicode - does this mean I cannot use tsearch2?! I also need to include many non-standard words in the index such as urls and message IDs contained in the text. What's message ID ? Integer ? it's already recognized by parser. try select * from token_type(); Also, last version of tsearch2 (for 7.3 grab from http://www.sai.msu.su/~megera/postgres/gist/tsearch/V2/, for 7.4 - available from CVS) has rather useful function - ts_debug

apod=# select * from ts_debug('http://www.sai.msu.su/~megera');
 ts_name | tok_type | description |     token      | dict_name |    tsvector
---------+----------+-------------+----------------+-----------+------------------
 simple  | host     | Host        | www.sai.msu.su | {simple}  | 'www.sai.msu.su'
 simple  | lword    | Latin word  | megera         | {simple}  | 'megera'
(2 rows)

I get the feeling that building these indexes will by no means be an easy task so any suggestions will be gratefully received! You may write your own parser, at last.
Some info about parser API: http://www.sai.msu.su/~megera/oddmuse/index.cgi/Tsearch_V2_in_Brief Parser writing...scary stuff :-) Thanks! -- ---(end of broadcast)--- TIP 9: the planner will ignore your desire to choose an index scan if your joining column's datatypes do not match
Re: [GENERAL] Sorting Problem
en_US locale skips punctuation from the sorting index, so in your case

RM-791 RM-AV2100 RM-PP401 RM-PP402 RM-PP404 RM-V10 RM-V11 RM-V12 RMT-D10 RMT-D108A RMT-D109A RMT-D116A RMT-V402

sorts as

RM791 RMAV2100 RMPP401 RMPP402 RMPP404 RMV10 RMV11 RMV12 RMTD10 RMTD108A RMTD109A RMTD116A RMTV402

-Original Message- From: Kathy zhu [mailto:[EMAIL PROTECTED] Sent: Tuesday, August 12, 2003 3:30 PM To: Tom Lane Cc: Tim Edwards; [EMAIL PROTECTED] Subject: Re: [GENERAL] Sorting Problem Do you mean that sorting doesn't work for the en_US locale ??? And, does encoding affect sorting at all ?? thanks, kathy Tom Lane wrote: Tim Edwards [EMAIL PROTECTED] writes: When I sort ASC on the varchar I get some strange results. Here's a section of data cut after running a sort. It starts with RM-, then does RMT-, then goes back for more RM-. Sounds like you're in en_US locale, or at least something other than C locale. Unfortunately this can only be fixed by re-initdb'ing :-( regards, tom lane ---(end of broadcast)--- TIP 5: Have you checked our extensive FAQ? http://www.postgresql.org/docs/faqs/FAQ.html
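The effect described above can be reproduced with a sort key that drops punctuation before comparing. Real en_US collation tables are more elaborate than this strip-and-compare sketch, but on these model numbers the result is the same:

```python
# Sketch: why RMT-D10 can sort between RM- entries under en_US.
# Stripping punctuation before comparing approximates a collation
# that treats punctuation as insignificant at the primary level.
import re

def en_us_like_key(s: str) -> str:
    return re.sub(r"[^A-Za-z0-9]", "", s)

models = ["RM-791", "RMT-D10", "RM-V12", "RM-AV2100"]
print(sorted(models))                      # C locale: all "RM-" before "RMT-"
print(sorted(models, key=en_us_like_key))  # punctuation ignored: RMT-D10 interleaves
```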
Re: [GENERAL] Sorting Problem
Dennis Gearon wrote: How did you solve the problem .. :-) inlining - most chars are just ascii and there are trivial optimizations that can lead to just as fast as moving 4x the data around. ---(end of broadcast)--- TIP 6: Have you searched our list archives? http://archives.postgresql.org
Re: [GENERAL] remove $ from money field
As others have said, money is deprecated. I used this function to migrate some money data once; you could use it to reformat data if you wanted. CREATE OR REPLACE FUNCTION x_money(money) RETURNS decimal AS ' set moneydata $1 regsub -all {[,$]} $moneydata {} workable return $workable ' language 'pltclu'; Robert Treat On Fri, 2003-08-08 at 20:04, Robby Russell wrote: Should be a simple question. When selecting a field that is of type money, how can I remove the $ when selected? example: $10.00 would return as 10.00 -Robby -- PostgreSQL :: The Enterprise Open Source Database ---(end of broadcast)--- TIP 3: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [EMAIL PROTECTED] so that your message can get through to the mailing list cleanly
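For anyone doing the same clean-up outside the database, here is the equivalent of that regsub in Python. The function name is made up; this is a sketch of the same transformation, not the pltclu function itself:

```python
# Strip '$' and ',' from a money literal and keep an exact numeric
# value, mirroring the regsub in the pltclu function above.
import re
from decimal import Decimal

def money_to_decimal(text: str) -> Decimal:
    return Decimal(re.sub(r"[,$]", "", text))

print(money_to_decimal("$10.00"))     # 10.00
print(money_to_decimal("$1,234.56"))  # 1234.56
```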
Re: [GENERAL] problem with timeofday() function in cvs PostgreSQL
On Fri, 8 Aug 2003, Pavel Stehule wrote: In this version I can't convert returned value to time. timeofday() return text. I need convert to timestamp before. It is normal behavior? I expected timestamp as returned type. It is documented to return a string for historical reasons: http://www.postgresql.org/docs/7.3/static/functions-datetime.html#FUNCTIONS-DATETIME-CURRENT You probably want now() instead of that function. -- /Dennis ---(end of broadcast)--- TIP 4: Don't 'kill -9' the postmaster
Re: [GENERAL] postmaster(s) have high load average
Martijn van Oosterhout wrote: Have you run VACUUM and/or VACUUM FULL and/or ANALYZE recently? a) yes. I have it run analyze every 30 minutes or 1600 record additions. Records are never updated or deleted so I assume I don't need vacuum. b) It does it even at start up when there are fewer than 100 records in the database. c) Would this even matter for clients that only connect but NEVER make any requests from the database? --Chris On Thu, Aug 07, 2003 at 05:44:05PM -0600, Chris Webster wrote: I have one process which writes a single float into 300 columns once per second. I then run 4 process, from remote computers, to query a small subset of the latest row. I have even commented out everything in the query programs, all they do is sleep, and the associated postmaster still sucks up 15% - 20% CPU. Computer is a P4 /w 1Gig memory, all disk access is local. RH9 /w stock postgresql-7.3 installed. I have searched the documentation and tech site high and low for ideas 17:36:27 up 31 days, 6:07, 13 users, load average: 4.11, 2.48, 1.62 107 processes: 99 sleeping, 8 running, 0 zombie, 0 stopped CPU states: 22.3% user 76.0% system 0.0% nice 0.0% iowait 1.5% idle Mem: 1030408k av, 976792k used, 53616k free, 0k shrd, 178704k buff 715252k actv, 33360k in_d, 22348k in_c Swap: 2048248k av, 91308k used, 1956940k free 589572k cached PID USER PRI NI SIZE RSS SHARE STAT %CPU %MEM TIME CPU COMMAND 23389 cjw 16 0 2896 2752 2132 R18.2 0.2 0:44 0 postmaster 23388 cjw 16 0 2896 2752 2132 S18.0 0.2 0:45 0 postmaster 23391 cjw 16 0 2896 2752 2132 S18.0 0.2 0:43 0 postmaster 23366 cjw 16 0 3788 3644 2560 S17.8 0.3 2:32 0 postmaster 23392 cjw 16 0 2896 2752 2132 R16.2 0.2 0:05 0 postmaster ---(end of broadcast)--- TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]
Re: [GENERAL] indices and cidr/inet type
On Tue, 05 Aug 2003 18:46:47 -0400 Tom Lane [EMAIL PROTECTED] wrote: Richard Welty [EMAIL PROTECTED] writes: hmmm. where should i go looking for sample code implementing operator classes as an example? contrib. Specifically, I'd suggest looking at contrib/seg, which defines a datatype for interval on the real line and builds a GIST opclass for it. This seems like it'd map very easily to CIDR subnets. very good, thanks. i'll go rooting about for documentation on how GIST works. also, how does 7.4 handle comparison of v6 and v4 addresses? i should think it should map the v4 addresses into the v6 space provided for v4 mapping, but that's not the only thing that might have been done. richard -- Richard Welty [EMAIL PROTECTED] Averill Park Networking 518-573-7592 Java, PHP, PostgreSQL, Unix, Linux, IP Network Engineering, Security ---(end of broadcast)--- TIP 2: you can get off all lists at once with the unregister command (send unregister YourEmailAddressHere to [EMAIL PROTECTED])
Re: [GENERAL] ext3 block size
We're running on ext2 on our box, with nightly backups. There's not much in the database we can't recreate from feeds, and we aren't doing financials on it. I've also heard of Reiser having had some problems on SMP systems. Of course, our box hasn't gone down unexpectedly ever, either due to OS / hardware / software crashes etc... On Wed, 6 Aug 2003, DeJuan Jackson wrote: Don't know the answer to your question, but I thought I would just pipe in and say that if this is an SMP (has multiple processors) Linux box you don't want to use ext3!!! I used ext3 on my SMP box here at work and now I can't have children (I guess it would help if I got a wife first)!! But in all seriousness SMP + ext3 = BAD(unpredictable crashes depending loosely related to system load). Wilson A. Galafassi Jr. wrote: hello. my database size is 5GB. what is the block size recommend? thanks wilson ---(end of broadcast)--- TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]
Re: [GENERAL] 7.4Beta1 failed to create socket: Address family not
Bruce Momjian [EMAIL PROTECTED] writes: There was a big discussion over whether we should require IPv6 to be enabled individually, and then throw a hard error if IPv6 fails, but at this stage, it seemed best to most to just try IPv6 and soft-fail, while throwing a message in the server logs. The real problem is perhaps that the message gives no hint that it's talking about being unable to establish an IPv6 socket. With that hint, perhaps people would realize that it's not a problem. regards, tom lane ---(end of broadcast)--- TIP 8: explain analyze is your friend
Re: [GENERAL] PL/C functions
Tom Lane wrote: 1. There is no provision for a single pg_proc entry to have a variable number of arguments. 2. You can make multiple pg_proc entries referencing the same C function. The C function can find out how many arguments it was actually passed (use PG_NARGS()). So you could make several different pg_proc entries and get the desired effect, at some tedium. Joe Conway has posted a few examples using this approach, IIRC. See contrib/dblink in 7.4beta -- there are several functions using this method, e.g. dblink_connect(). Joe ---(end of broadcast)--- TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]
Re: [GENERAL] tsearch2 on postgresql 7.3.4
[EMAIL PROTECTED] writes: tsearch2.so is located at /usr/local/pgsql/lib/tsearch2.so which is the same directory as fti.so which works. There may be *an* fti.so there, but it's not necessarily the same one Postgres is using. Try pg_config --pkglibdir to find out where $libdir really points. regards, tom lane ---(end of broadcast)--- TIP 7: don't forget to increase your free space map settings
[GENERAL] postmaster(s) have high load average
I have one process which writes a single float into 300 columns once per second. I then run 4 process, from remote computers, to query a small subset of the latest row. I have even commented out everything in the query programs, all they do is sleep, and the associated postmaster still sucks up 15% - 20% CPU. Computer is a P4 /w 1Gig memory, all disk access is local. RH9 /w stock postgresql-7.3 installed. I have searched the documentation and tech site high and low for ideas 17:36:27 up 31 days, 6:07, 13 users, load average: 4.11, 2.48, 1.62 107 processes: 99 sleeping, 8 running, 0 zombie, 0 stopped CPU states: 22.3% user 76.0% system 0.0% nice 0.0% iowait 1.5% idle Mem: 1030408k av, 976792k used, 53616k free, 0k shrd, 178704k buff 715252k actv, 33360k in_d, 22348k in_c Swap: 2048248k av, 91308k used, 1956940k free 589572k cached PID USER PRI NI SIZE RSS SHARE STAT %CPU %MEM TIME CPU COMMAND 23389 cjw 16 0 2896 2752 2132 R18.2 0.2 0:44 0 postmaster 23388 cjw 16 0 2896 2752 2132 S18.0 0.2 0:45 0 postmaster 23391 cjw 16 0 2896 2752 2132 S18.0 0.2 0:43 0 postmaster 23366 cjw 16 0 3788 3644 2560 S17.8 0.3 2:32 0 postmaster 23392 cjw 16 0 2896 2752 2132 R16.2 0.2 0:05 0 postmaster -- --Chris How is it one careless match can start a forest fire, but it takes a whole box to start a campfire? ---(end of broadcast)--- TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]
Re: v7.4 on Windows ... (Was: Re: [GENERAL] v7.4 Beta 1 Bundle ...)
It compiles, but does not link because of the missing fork/exec and signals. --- The Hermit Hacker wrote: On Wed, 6 Aug 2003, Robert Treat wrote: depends on what you mean by any. I believe that the standard tarball will compile under windows, but to what extent it will actually run I couldn't say. This release is not intended for native use on windows (that had to be pushed back to 7.5), though it will certainly run via cygwin. Ummm ... will it compile? I thought that the issue with the Windows native port was that it wouldn't yet ... something about fork() vs exec() that Bruce was working on? ---(end of broadcast)--- TIP 3: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [EMAIL PROTECTED] so that your message can get through to the mailing list cleanly -- Bruce Momjian| http://candle.pha.pa.us [EMAIL PROTECTED] | (610) 359-1001 + If your life is a hard drive, | 13 Roberts Road + Christ can be your backup.| Newtown Square, Pennsylvania 19073 ---(end of broadcast)--- TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]
Re: [GENERAL] Fatal error: Call to undefined function: pg_connect()
Is it necessary to uninstall the previous version of PHP on my system? Which version is best compatible with PostgreSQL 7.3.2? Can you give any link/info for recompiling PHP? Sreedhar - Original Message - From: Jean-Christian Imbeault [EMAIL PROTECTED] To: shreedhar [EMAIL PROTECTED] Sent: Tuesday, August 05, 2003 11:33 AM Subject: Re: [GENERAL] Fatal error: Call to undefined function: pg_connect() shreedhar wrote: Fatal error: Call to undefined function: pg_connect() This is a PHP error. Guessing you re-compiled PHP and forgot to include postgres support, or somehow the libraries got moved. You'll probably need to recompile PHP is my best guess. Jean-Christian Imbeault ---(end of broadcast)--- TIP 6: Have you searched our list archives? http://archives.postgresql.org
Re: [GENERAL] Sorting Problem
Dennis Gearon wrote: I agree with all of that except for one caveat: all my reading, and just general off-the-cuff thinking, says that processing variable-width characters SIGNIFICANTLY slows an application. It seems better to PROCESS fixed-width characters (1, 2, 4 byte), and TRANSMIT variable-width characters (avoiding the null problem.) I can and have solved that problem. If you can assume utf-8 encoding then there are available to you a bunch of tricks that take this problem away. The other problem, the memory (and hence cache) utilization of a wide-char-only solution, is far more significant. Cache effects are the primary killer for performance in an app like a database. Anyhow, before making any decisions one should do a bunch of analysis. Cheers G ---(end of broadcast)--- TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]
Re: [GENERAL] Sorting Problem
I think the question is how often are you passing data around/storing it _in_ your application and how often are you processing it. --- Dennis Gearon wrote: I agree with all of that except for one caveat: all my reading, and just general off-the-cuff thinking, says that processing variable-width characters SIGNIFICANTLY slows an application. It seems better to PROCESS fixed-width characters (1, 2, 4 byte), and TRANSMIT variable-width characters (avoiding the null problem.) Gianni Mariani wrote: Dennis Gearon wrote: Got a link to that section of the standard, or better yet, to an 'interpreted' version of the standard? :-) Stephan Szabo wrote: On Wed, 13 Aug 2003, Dennis Gearon wrote: Dennis Björklund wrote: In the future we need indexes that depend on the locale (and a lot of other changes). I agree. I've been looking at the web on this subject a lot lately. I am **NOT** a microslop fan, but SQL-SERVER even lets a user define a language (maybe encoding) down to the column level! I've been reading on GNU-C and on languages, encoding, and localization. http://pauillac.inria.fr/~lang/hotlist/free/licence/fsf96/drepper/paper-1.html http://h21007.www2.hp.com/dspp/tech/tech_TechSingleTipDetailPage_IDX/1,2366,1222,00.html There are three basic approaches to doing different languages in computerized text: A/ various adaptations of the 8 bit character set, I.E. the ISO-8859-x series. B/ wide characters This should be how Postgres stores data internally. C/ Multibyte characters This is how Postgres should default to sending data OUT of the application, i.e. to the display or the web, or other system applications SQL has a system for defining character set specifications, collations and such (per column/literal in some cases). We should probably look at it before making decisions on how to do things. I thought UNIX (SCOTM) systems also had a way of being able to define collation order. 
see: ftp://dkuug.dk/i18n/WG15-collection/locales for a collection of all ISO standardized locales (the WG15 ISO work group's stuff). Do a man localedef on most Linuxen or UNIXen. As for wide characters vs multibyte, there is no clear winner. The right answer DEPENDS on the situation. Wide characters on some platforms are 16 bit, which means that when you do Unicode you'll still have problems with surrogate pairs (meaning that it's still multi (wide) char), so you still have all the problems of multi-byte encodings. You could decide to process everything in a PG-specific 4 byte wide char and do all text in Unicode, but the overhead in processing 4 times the data is quite significant. The other option is to store all data in utf-8 and have all text code become utf-8 aware. I have found in practice that the utf-8 option is significantly easier to implement, 100% Unicode compliant and the best performer (because of reduced memory requirements). The POSIX APIs for locales are not very good for modern-day programs. I'm not sure where the mbr* and the wcr* APIs are in the standardization process, but if these are not well supported, you're on your own and will need to implement similar functionality from scratch. And for that matter, the collation functions all operate on a current locale, which is really difficult to work with in multi-locale applications. -- Bruce Momjian| http://candle.pha.pa.us [EMAIL PROTECTED] | (610) 359-1001 + If your life is a hard drive, | 13 Roberts Road + Christ can be your backup.| Newtown Square, Pennsylvania 19073 ---(end of broadcast)--- TIP 6: Have you searched our list archives? http://archives.postgresql.org
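The memory argument above is easy to quantify: for mostly-ASCII text, UTF-8 stays near one byte per character, while a fixed 4-byte wide-char representation always pays the full factor of four. A quick illustration (the sample strings are arbitrary):

```python
# Compare UTF-8 against a fixed 4-byte-per-code-point encoding
# (UTF-32) for ASCII, accented, and Cyrillic text.
samples = {
    "ascii":    "SELECT * FROM table_a;",
    "accented": "Bj\u00f6rklund",              # one 2-byte UTF-8 sequence
    "cyrillic": "\u0431\u0430\u0437\u0430",    # 2 bytes per char in UTF-8
}
for name, text in samples.items():
    u8 = len(text.encode("utf-8"))
    u32 = len(text.encode("utf-32-le"))        # fixed 4 bytes per code point
    print(f"{name:9} chars={len(text):2} utf8={u8:3} utf32={u32:3}")
```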
Re: [GENERAL] XML?
What would be really cool (although a lot harder to implement) would be the ability to generate a hierarchical XML document when using foreign key relationships. Trying to tell PG how to format that might be a bit of an issue, though. Jon On Thu, 7 Aug 2003, Gavin M. Roy wrote: Add an echo $returnData right after the pg_Close or it won't output the data... in response to your followup, I doubt it would be hard to implement something similar in java, it's just using the standard pg api to build the xml. Gavin Gavin M. Roy wrote: Here's some PHP code I use to do just that:

<?xml version="1.0" ?>
<?php
function returnRecord($resultid, $row, $level) {
    $prepend = "";
    for ( $y = 0; $y < $level; $y++ )
        $prepend .= "\t";
    $record = $prepend . "<RECORD>\n";
    for ( $y = 0; $y < pg_NumFields($resultid); $y++ ) {
        $record .= $prepend . "\t" . "<" . pg_FieldName($resultid, $y) . ">";
        $data = Trim(pg_Result($resultid, $row, $y));
        $data = ereg_replace("&", "&amp;", $data);
        $record .= $data;
        $record .= "</" . pg_FieldName($resultid, $y) . ">\n";
    }
    $record .= $prepend . "</RECORD>\n";
    return $record;
}
$conn = pg_Connect("my database connect string");
$result = pg_Query($conn, "SELECT * FROM MY_TABLE;");
$returnData = "";
$rows = pg_NumRows($result);
for ( $y = 0; $y < $rows; $y++ )
    $returnData .= returnRecord($result, $y, 1);
pg_FreeResult($result);
pg_Close($conn);
?>

Hope this helps, Gavin Bjorn T Johansen wrote: I need to convert recordsets to XML, is there an automatic way to do this in PostgreSQL or a tool I can use? Or do I have to code this manually? Regards, BTJ --- Bjørn T Johansen (BSc,MNIF) Executive Manager [EMAIL PROTECTED] Havleik Consulting Phone : +47 67 54 15 17 Conradisvei 4 Fax : +47 67 54 13 91 N-1338 Sandvika Cellular : +47 926 93 298 http://www.havleik.no --- The stickers on the side of the box said "Supported Platforms: Windows 98, Windows NT 4.0, Windows 2000 or better", so clearly Linux was a supported platform. 
---(end of broadcast)--- TIP 4: Don't 'kill -9' the postmaster
[GENERAL] Join faster than single table query
Hi: I must have missed something, but how is it possible that a join on tables A and B is faster (a lot faster) than a query to one of the tables with the same conditions? The problem seems to be with the query plan: in the case of a query to table_a only, the planner executes a Seq Scan; in the case of a join, an Index Scan. table_a has about 4M records, so the difference is quite noticeable.

explain select * from table_a where field_1=1 and field_2='20030808' and field_3='963782342';
NOTICE: QUERY PLAN:
Seq Scan on table_a (cost=0.00..373661.73 rows=12 width=227)
EXPLAIN

explain select * FROM table_b, table_a WHERE table_b.field_1 = table_a.field_1 AND table_b.field_3 = table_a.field_3 AND table_b.field_3 in ('963782342') AND table_a.field_2 = '20030808' ;
NOTICE: QUERY PLAN:
Nested Loop (cost=0.00..317.07 rows=3 width=351)
- Seq Scan on table_b (cost=0.00..308.80 rows=1 width=124)
- Index Scan using table_a_i01 on table_a (cost=0.00..8.24 rows=2 width=227)
EXPLAIN

Index on table_a is defined on field_1, field_2 and field_3. Thanks a lot for any help. Ruben. ---(end of broadcast)--- TIP 5: Have you checked our extensive FAQ? http://www.postgresql.org/docs/faqs/FAQ.html
Re: [GENERAL] PostgreSQL and ACID properties support
On 7 Aug 2003, Neil Zanella wrote: of transactions as opposed to the operations taking place serially? Also, does PostgreSQL fully support SQL99 CHECK constraints? And if the system crashes... Technically no. Check constraints with subselects are not supported. Using a function gets you half the constraint, but doesn't notice modifications to the other table(s) involved. You might need triggers to handle that portion. ---(end of broadcast)--- TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]
[GENERAL] The database is very slow !
I currently have PostgreSQL 7.1 installed on a server with about 700 MB of RAM. I have many problems of speed with a database I created. For example, it took almost 12 sec to run the query select * from table directly from PostgreSQL, on a table with 4000 records and 60 fields ... And the whole application built on this database is very very slow (some pages take almost 20 seconds to load !) I verified the indexes, I think they are ok, and I tried to make my queries as short as possible (without select * but with select field1, field2, ...) But anyway, I guess there is a problem of speed directly with the database, because I think it is not normal to need 12 sec to run a query on a table with only 4000 records ... Has anybody an idea ? Thanks Krysto ---(end of broadcast)--- TIP 4: Don't 'kill -9' the postmaster
Re: [GENERAL] Continued mail server problems for all PGSQL mailing lists I'm on
In response to Phil Howard: 207.173.200.206? We aren't using that server for relaying ... we *are* using: 207.173.200.143, which does reverse: %nslookup 207.173.200.143 Server: neptune.hub.org Address: 64.117.224.130 Name:hosting.commandprompt.com Address: 207.173.200.143 Joshua, are there multiple IPs on that box that he might be seeing? Then is 207.173.200.206 a spammer running Sendmail and faking these mailing lists? FWIW server response through Telnet: 220 localhost.localdomain ESMTP Sendmail 8.11.6/8.11.6; Sat, 9 Aug 2003 16:00:03 -0700 HansH ---(end of broadcast)--- TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]
[GENERAL] PostgreSQL
Hello, all! I have a good question for the PostgreSQL FAQ. How to use string functions (like UPPER()/LOWER()) for non-latin strings? Why doesn't the UPPER() function work with my UNICODE PostgreSQL database which contains non-latin characters (like cyrillic)? How to make a case-insensitive search on a text field which contains non-latin characters? Thanks for your answers! Best regards Eugeny
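For comparison, here is what locale-independent Unicode case mapping looks like on the client side. Python's string methods use the Unicode tables directly rather than the C library's current locale, which is roughly what UPPER()/LOWER() would need to do to handle Cyrillic in a UNICODE database (this is a client-side sketch, not a server-side fix):

```python
# Unicode-table-based case mapping works on Cyrillic regardless of
# the process locale, unlike C-locale toupper().
text = "\u0411\u0430\u0437\u0430"   # Cyrillic for "database" (Baza)
print(text.upper())                 # uppercased via Unicode tables
# Case-insensitive comparison via casefold():
print(text.casefold() == "\u0431\u0430\u0437\u0430")
```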
Re: [GENERAL] unexpected chunk number
sorry about dumb ´lenght´ instead ´length´. the length of chunk_data is 255. tnx. Eric - Original Message - From: Eric Anderson Vianet SAO [EMAIL PROTECTED] To: PostgreSQL general [EMAIL PROTECTED] Sent: Friday, August 08, 2003 1:14 PM Subject: Re: [GENERAL] unexpected chunk number it generates following error: ERROR: Function 'lenght(bytea)' does not exist Unable to identify a function that satisfies the given argument types You may need to add explicit typecasts this row is the only one which has chunk_seq = 8. why couldn´t I simply UPDATE it? tnx Eric - Original Message - From: Tom Lane [EMAIL PROTECTED] To: Eric Anderson Vianet SAO [EMAIL PROTECTED] Cc: PostgreSQL general [EMAIL PROTECTED] Sent: Friday, August 08, 2003 12:43 PM Subject: Re: [GENERAL] unexpected chunk number Eric Anderson Vianet SAO [EMAIL PROTECTED] writes: pg_dump: dumping out the contents of table tbdmovimento pg_dump: ERROR: unexpected chunk number 8 (expected 0) for toast value 6935693 Hm. Could we see the results of select chunk_seq,length(chunk_data) from tbdmovimento's toast table where chunk_id = 6935693; If you are not trusting the index then it might be good to try this with enable_indexscan turned off and see if you get the same answers. regards, tom lane ---(end of broadcast)--- TIP 5: Have you checked our extensive FAQ? http://www.postgresql.org/docs/faqs/FAQ.html
Re: [GENERAL] Commercial support?
Folks, Josh Berkus [EMAIL PROTECTED] does PostgreSQL support, and he is in SF. I am CC'ing him. What kind of support is requested? -- Josh Berkus Aglio Database Solutions San Francisco ---(end of broadcast)--- TIP 7: don't forget to increase your free space map settings
Re: [GENERAL] PostGreSQL - Accessing It
In article [EMAIL PROTECTED], Joshua D. Drake wrote: Michael wrote: I cannot access/create a database from root or any of the other accounts I have set up for RH9. I noticed that there is a user set up for postgres (I guess this is part of the postgres install) but can't log in as that user. I can't even read or change the password. How do I get at PostgreSQL? You need to be the user postgres. Try psql -U postgres template1 . You may also have to initdb the database. I have the same problem as Michael. I've installed postgresql and am running the service, but I can't log in as postgres. I just tried the command line psql -U postgres template1, but I get the message authentication failed for user 'postgres'. The administrator's manual confirms that I first need to connect as this predefined user, but doesn't say what that user's password is. How can I find out the password for user postgres so that I can finally begin working with this database? TIA, -- Mel Roman [EMAIL PROTECTED] ---(end of broadcast)--- TIP 4: Don't 'kill -9' the postmaster
Support contracts (was Re: [GENERAL] Commercial support?)
On Wed, 2003-08-13 at 11:59, Dustin Sallings wrote: On Tuesday, Aug 12, 2003, at 09:55 US/Pacific, Al Hulaton wrote: [snip] We're looking at various databases for production use in our product and pricing the various solutions. My personal preference is postgres, but we're also looking at Oracle and Sybase at this point. One thing we get from them that we do not get from postgres is a support contract. It's the general political CYA thing. If something breaks and we can't figure out what it is, we need someone who can come in and make it better. What we do not need at this point is any type of implementation assistance. We pretty much know what we're doing, but management always likes to assume the people whose salaries they pay aren't capable of doing their jobs. :) I *like* the fact that my company has a support contract with Oracle on our production databases. DBMSs are extremely complicated beasts, and even though I am an expert, and can extract the db from most stack dump conditions, there *are* errors that are beyond my knowledge, but can be quickly solved by someone who lives and breathes Rdb internals. -- +---+ | Ron Johnson, Jr.Home: [EMAIL PROTECTED] | | Jefferson, LA USA| | | | Man, I'm pretty. Hoo Hah! | |Johnny Bravo | +---+ ---(end of broadcast)--- TIP 7: don't forget to increase your free space map settings
Re: [GENERAL] Sorting Problem
Dennis Gearon [EMAIL PROTECTED] writes: You mean in his own local environment? So all his programs, console operations, etc, will have the new encoding? Or 'LANG/LC_ALL' for Postgres specifically? I mean he needs to run initdb with C as the selected locale. It has nothing to do with what environment his other programs run in. regards, tom lane ---(end of broadcast)--- TIP 9: the planner will ignore your desire to choose an index scan if your joining column's datatypes do not match
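Concretely, that means re-initializing the cluster with the C locale; a rough sketch follows (paths are illustrative, and since initdb needs a fresh data directory you must dump and reload any existing data):

```shell
# Dump everything first; the old cluster's data does not carry over.
pg_dumpall > backup.sql

# Initialize a new cluster with the C locale:
initdb --locale=C -D /var/lib/pgsql/data-c

# Start the server on the new cluster and reload:
postmaster -D /var/lib/pgsql/data-c &
psql -d template1 -f backup.sql
```

The locale is fixed at initdb time for the whole cluster, which is why changing LANG/LC_ALL afterward has no effect on sort order.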
Re: [GENERAL] importing db as text files
On Thu, 14 Aug 2003 07:34:55 +1000 Jason Godden [EMAIL PROTECTED] wrote: Hi expect, Best way in my opinion is to use the copy table command. This way Pg will actually 'massage' the data (string escapes and all) for you. I guess we're of the same opinion. I did use the copy table command. I believe the problem is that pg is unable to use the default value when a value is not present. If you have complex files it is best to use a combo of sed/awk to pre-massage the field delimiters and import the data into a temporary table. In most instances it is best to pipe the data to the psql command using copy table from stdin, as copy from file requires that you are the process owner. Always use a temporary import table and perform validation/further conversion in that table (IMO). eg (from one of my earlier posts): cat unprocesseddata.txt | sed "s/\",\"/$TAB/g" | sed "s/\"//g" | sed "s/.$//g" | psql -dmydatabase -c "copy importtable from stdin delimiter '\t';" The first sed will replace all instances of "," with a tab character (I achieve tab in my shell scripts with a line like TAB=$(printf "\t")). The second sed will remove the quote marks at the start and end of each line, because the first only checks for ",". The third sed will remove the Windows carriage returns (if any - depends on the platform the file was generated from). Note here that my file format never contains any quote marks in a field so I can safely run the second sed. If your import file contains these then you will need to change the regex or use awk. Regards, Jason On Thu, 14 Aug 2003 07:14 am, expect wrote: What's the big deal with importing text files? I have a 70 MB file to import and it's been one problem after another. I used the copy command and it appears that it's just not possible. I finally massaged the file into a .sql file and ran that using \i db.sql but that failed too because I overlooked ' in names like D'Adario. The other problem I encountered was that a numeric field had to have data in it, pg would not default to the default value. 
So instead of massaging all the data again I decided to change the data type for that column. This is my first experience with postgresql and I'm wondering if I should expect to encounter similar pain as I go further into this? So far it's been very painful trying to do what I thought would be easy and what I think should be easy. PostgreSQL 7.3.4 on linux redhat 9 ---(end of broadcast)--- TIP 7: don't forget to increase your free space map settings
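Two of the stumbling blocks above (embedded quotes like D'Adario, and empty fields in a numeric column) can be handled in the same pre-massaging pass. A minimal sed sketch; note that COPY treats \N as NULL but never substitutes a column's default for an empty string, so \N only helps if the column is nullable:

```shell
# Double embedded single quotes so D'Adario survives in an INSERT script:
printf "D'Adario\n" | sed "s/'/''/g"      # prints D''Adario

# Turn an empty CSV field into \N so COPY loads it as NULL:
printf 'a,,c\n' | sed 's/,,/,\\N,/g'      # prints a,\N,c
```

To get actual default values, COPY into a temporary table first and then INSERT ... SELECT into the real table, letting the defaults apply there.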
Re: [GENERAL] Commercial support?
On Wed, 13 Aug 2003, Dustin Sallings wrote: On Tuesday, Aug 12, 2003, at 09:55 US/Pacific, Al Hulaton wrote: That's something we do here at Command Prompt. Phone number's below or there's the guaranteed 4 hour response time form we have on our website. We're looking at various databases for production use in our product and pricing the various solutions. My personal preference is postgres, but we're also looking at Oracle and Sybase at this point. One thing we get from them that we do not get from postgres is a support contract. It's the general political CYA thing. If something breaks and we can't figure out what it is, we need someone who can come in and make it better. What we do not need at this point is any type of implementation assistance. We pretty much know what we're doing, but management always likes to assume the people whose salaries they pay aren't capable of doing their jobs. :) There are actually several companies that provide commercial support for postgresql. Here's two: www.pgsql.com www.commandprompt.com ---(end of broadcast)--- TIP 7: don't forget to increase your free space map settings
[GENERAL] problem with timeofday() function in cvs PostgreSQL
Hello In this version I can't convert the returned value to a time directly: timeofday() returns text, so I need to cast it to timestamp first. Is this normal behavior? I expected timestamp as the return type. regards Pavel Stehule ---(end of broadcast)--- TIP 8: explain analyze is your friend
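For what it's worth, the explicit cast is a one-liner; this is just an illustration of the workaround, not a claim about what the return type ought to be:

```sql
-- timeofday() returns text; cast it explicitly:
SELECT timeofday()::timestamp;

-- now()/current_timestamp already return a timestamp type, though they
-- report transaction start time rather than wall-clock time like
-- timeofday() does:
SELECT now();
```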
Re: [GENERAL] INSERT RULE QUERY ORDER
Justin Tocci [EMAIL PROTECTED] writes: Thanks for the reply Tom, here's the rule that works: CREATE RULE tquotehistory_update AS ON UPDATE TO vtquotehistory DO INSTEAD ( INSERT INTO tquotehistory_log (ID, Item, Quote1, DemandCost1, Quote2, DemandCost2, DueDate, POIntoInmass, Weeks, QuoteSent, Reference, Supplier, TLCProposal, Counter, Notes) VALUES (old.ID, old.Item, old.Quote1, old.DemandCost1, old.Quote2, old.DemandCost2, old.DueDate, old.POIntoInmass, old.Weeks, old.QuoteSent, old.Reference, old.Supplier, old.TLCProposal, old.Counter, old.Notes); UPDATE tquotehistory SET Item = new.Item, Quote1 = new.Quote1, DemandCost1 = new.DemandCost1, Quote2 = new.Quote2, DemandCost2 = new.DemandCost2, DueDate = new.DueDate, POIntoInmass = new.POIntoInmass, Weeks = new.Weeks, QuoteSent = new.QuoteSent, Reference = new.Reference, Supplier = new.Supplier, TLCProposal = new.TLCProposal, Counter = new.Counter, Notes = new.Notes WHERE (tquotehistory.ID = old.ID); ); Switch the order and the INSERT doesn't insert a record into the log, but the UPDATE updates and there is no error. Hm. Am I right in supposing that vtquotehistory is a view on tquotehistory? Does the UPDATE cause the row that was visible in the view to be no longer visible in the view (or at least not matched by the constraints on the original UPDATE command)? If so, that's your problem --- the old references in the INSERT will no longer find any matching row in the view. If your goal is to log operations on tquotehistory, my recommendation is to forget about views and rules and just use a trigger on tquotehistory. Triggers are *way* easier to understand, even if the notation looks worse. regards, tom lane ---(end of broadcast)--- TIP 7: don't forget to increase your free space map settings
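A minimal sketch of Tom's trigger suggestion, with the column list abbreviated to three columns for brevity (the real one would carry all fifteen); plpgsql must be installed in the database (createlang plpgsql):

```sql
-- Log the pre-update row from a trigger instead of a view + rule.
CREATE FUNCTION log_tquotehistory() RETURNS trigger AS '
BEGIN
    INSERT INTO tquotehistory_log (ID, Item, Quote1)
        VALUES (OLD.ID, OLD.Item, OLD.Quote1);
    RETURN NEW;
END;
' LANGUAGE plpgsql;

CREATE TRIGGER tquotehistory_log_trig
    BEFORE UPDATE ON tquotehistory
    FOR EACH ROW EXECUTE PROCEDURE log_tquotehistory();
```

Unlike the rule, the trigger fires once per physically updated row, so it cannot be confused by the updated row dropping out of the view.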
Re: [GENERAL] Continued mail server problems for all PGSQL mailing
On Sat, Aug 09, 2003 at 08:05:58PM -0300, The Hermit Hacker wrote: On Sat, 9 Aug 2003, Phil Howard wrote: Then is 207.173.200.206 a spammer running Sendmail and faking these mailing lists? My first guess is that the machine we are using @ CommandPrompt as a relay point has multiple IPs on it, and the outbound IP is being seen as 206, while we're sending it through 143 ... FWIW, the headers of your email contain the following: From [EMAIL PROTECTED] Sun Aug 10 09:27:21 2003 Received: from (hosting.commandprompt.com) [207.173.200.206] by svana.org with esmtp (Exim 3.35 #1 (Debian)) id 19ld7I-00040e-00; Sun, 10 Aug 2003 09:27:20 +1000 Received: from postgresql.org (developer.postgresql.org [64.117.224.193]) by hosting.commandprompt.com (8.11.6/8.11.6) with ESMTP id h79NRAK09068 for [EMAIL PROTECTED]; Sat, 9 Aug 2003 16:27:13 -0700 Hope this helps, -- Martijn van Oosterhout [EMAIL PROTECTED] http://svana.org/kleptog/ All that is needed for the forces of evil to triumph is for enough good men to do nothing. - Edmund Burke The penalty good people pay for not being interested in politics is to be governed by people worse than themselves. - Plato pgp0.pgp Description: PGP signature
[GENERAL] Update of foreign key values
I have two tables in two databases (Pg 7.2.1 - yes I need to upgrade but there are several other dependencies I have to resolve first) and I need to update one database's tables so they can be merged into the other database's table. I know I can drop the constraints and update the tables (primary key, and foreign key) but was hoping I'd not have to do that. An attempt at using the techniques in Joel Burton's Referential Integrity Tutorial Hacking the Referential Integrity tables was unsuccessful. Is there a method I can use to add 1000 to all the primary and foreign keys in one pass? Searches using Google and the PostgreSQL docs turned up nothing useful to my situation. I see that if the table/constraint had been created differently the updates would have cascaded, but that did not happen. TIA, Rod -- Open Source Software - Sometimes you get more than you paid for... ---(end of broadcast)--- TIP 7: don't forget to increase your free space map settings
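Absent ON UPDATE CASCADE, the usual route is the one you were hoping to avoid, sketched here with invented table and column names. Note that in 7.2 a foreign key is enforced by RI triggers, so depending on how the constraint was created you may need to drop those triggers rather than a named constraint:

```sql
BEGIN;
ALTER TABLE child DROP CONSTRAINT child_parent_fk;  -- or drop the RI triggers
UPDATE parent SET id = id + 1000;
UPDATE child  SET parent_id = parent_id + 1000;
ALTER TABLE child ADD CONSTRAINT child_parent_fk
    FOREIGN KEY (parent_id) REFERENCES parent (id);
COMMIT;
```

Doing it inside one transaction means no other session ever sees the keys out of sync.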
[GENERAL] locale and encoding
Hi all, What are the relationships/dependencies between encoding and locale settings? thanks, kathy ---(end of broadcast)--- TIP 8: explain analyze is your friend
[GENERAL] Searching Tables
I have a table with 4 fields, for example firstname | middlename | lastname | phonenumber Now I am trying to search for multiple entries for example SELECT * FROM phonenumbers; I am trying to use SELECT 'not sure what here' FROM phonenumbers; so that I can find any rows that have duplicate information in the phonenumber field. Thanks, __ Sent using Valuelinx Webmail System http://www.valuelinx.net/ Check us out for your dial-up and DSL needs. And now offering Nationwide Services! __ ---(end of broadcast)--- TIP 9: the planner will ignore your desire to choose an index scan if your joining column's datatypes do not match
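If the goal is to find phone numbers that occur more than once, a grouped query is the usual shape; table and column names here are assumed from the example above:

```sql
SELECT phonenumber, count(*) AS occurrences
FROM phonenumbers
GROUP BY phonenumber
HAVING count(*) > 1;
```

The HAVING clause filters the groups after counting, which is what a plain WHERE cannot do.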
[GENERAL] {REPOST, CLARIFIED} How to recognize PG SQL files?
On Wed, Aug 06, 2003 at 12:55:52PM -0400, Joel Burton wrote: Reposting, with some clarification to my request. Thanks to the several responses I received originally. Yes, I know that a perfectly valid PGSQL SQL file could contain only ANSI SQL and therefore not be recognized as PG-related. In that case, though, it would be recognized by Vim's ANSI SQL coloring, and given that's all this file contains, that's no problem. ;) However, given that many people edit pg_dump files (which do contain pgsql-isms in most cases), and many other people do use non-ANSI PG features, I figure that I should be able to recognize 95% of the files, and that's a win since it will consistently highlight PG syntax and make it easier to scan files, catch typos, etc. Some easy things: * if a filename ends with .pgsql, it will use PG syntax coloring * if a file contains a comment with pgsql in it in the first few lines, it will use PG syntax coloring * if a file contains the comments that pg_dump puts in a file, it will use PG syntax coloring. I'd still like to catch other cases, and still have the following questions: what features among our extensions are unique to us, and what features are used by other common DBs? People that have more recent experience with MySQL, Oracle, SQLServer, etc. can probably answer this question. Thanks, everyone! - j. I'm writing a syntax mode for PG for Vim (posted an early version earlier today) and would like to have Vim recognize that this is a PG SQL file (rather than a MySQL file or an Oracle file or such). I'm trying to brainstorm what the unique-looking parts of PG's syntax are. These need to be present in PG SQL files (hopefully not too obscure) but not present in other DB SQL files. The PG manual states how PG differs from SQL standards, but not how it differs from other popular databases. I've used MySQL and Oracle in the past, but not recently, and haven't used DB2 or SQLServer in ages and don't have docs for them anymore. 
I have a few possible suggestions. Can anyone: * tell me if these are used in other DB systems ( shouldn't be part of my syntax) or * provide other ideas for unique PG syntax My ideas: * \connect * template1 * from pg_ (selecting from a PG system table) * create rule * plpgsql, plperl, plpython, pltcl, pltclu, plruby ( now plphp, too, I suppose! ;) ) * nextval(, currval( I'd love to find something common, like SERIAL or CREATE SEQUENCE or such, but I suspect that other commonly-used databases use these. -- Joel BURTON | [EMAIL PROTECTED] | joelburton.com | aim: wjoelburton Independent Knowledge Management Consultant ---(end of broadcast)--- TIP 2: you can get off all lists at once with the unregister command (send unregister YourEmailAddressHere to [EMAIL PROTECTED])
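As a rough illustration of the marker idea above (purely a sketch of the detection heuristic, not Vim script, and the marker set is a guess, not Vim's actual logic), a grep over a candidate file could look like:

```shell
# Exit status 0 if the input looks PostgreSQL-specific.
looks_like_pg() {
  grep -E -i -q '\\connect|template1|from[[:space:]]+pg_[a-z_]+|create[[:space:]]+rule|plpgsql|plperl|nextval\(|currval\('
}

echo "SELECT nextval('quote_seq');" | looks_like_pg && echo "pgsql"
```

Usage would be `looks_like_pg < file.sql && set_pg_highlighting`; in Vim itself the equivalent check belongs in a filetype-detection autocommand scanning the first few lines.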