[GENERAL] The Best Postgresql Load Balancing Solution
Hi, I was searching for a load balancing solution for Postgres and found some ready-to-use software like PGCluster, Slony, pgpool and others. It would be really nice if someone knows which one is the best, taking into consideration that I have an already running application that I need to load balance. I would also like to know if someone has used Red Hat Cluster and GFS for implementing a PostgreSQL load balancing solution. I have already used that for implementing a mail load balancing solution with two sendmail servers running on two different machines but sharing the same storage.

Thanks for any help.

Najib Abi Fadel
System Administrator
Saint-Joseph University
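For the pgpool route mentioned above, load balancing is mostly a matter of pointing the pool at the backends. A rough sketch of the relevant pgpool.conf settings (parameter names as used by pgpool-II; the host names are placeholders, and you should verify each setting against the documentation of whichever version you deploy):

```
# Illustrative pgpool.conf fragment -- check names against your pgpool version.
listen_addresses = '*'
port = 9999                              # clients connect here instead of 5432

# Two PostgreSQL backends sharing the load (hypothetical hosts):
backend_hostname0 = 'db1.example.com'
backend_port0     = 5432
backend_weight0   = 1
backend_hostname1 = 'db2.example.com'
backend_port1     = 5432
backend_weight1   = 1

load_balance_mode = on                   # spread SELECTs across backends
replication_mode  = on                   # keep backends in sync via statement replication
```

The already-running application would then be repointed at port 9999 rather than at a single database server.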
Re: [GENERAL] select unique items in db
a schreef:
> it returns nil now, or nothing, and it is very very very slow

This is not much info. What is the actual query you are using now? What are the tables like that you are running your query against? I can't really get a good understanding of your problem. So far I have just given you a general answer to the question "how do I select unique items in a database". I find it odd that a DISTINCT query on a single table would return NULL/nothing. What is that rank function you are using? That you experience a speed drop could have a number of reasons. Using DISTINCT in a select query could very well decrease performance, but I can't tell you by what order of magnitude.

Regards, Stijn.

Stijn Vanroye wrote:
> a schreef: this doesn't work:
>
>   SELECT DISTINCT on link *, rank(ts_vec, to_tsquery('default', $qtxt)) FROM feed_entry WHERE ts_vec @@ to_tsquery('default', $qtxt) ORDER BY rank(ts_vec, to_tsquery('default', $qtxt)) DESC LIMIT 5 OFFSET 0
>
> Can you tell me how to get the DISTINCT elements in link? Thanks.

Just place your distinct fields between (). Example syntax:

  SELECT DISTINCT ON (field1, field2, fieldn) field1, field4

In your case:

  SELECT DISTINCT ON (link) *, rank[...]

Do remember that if you select different fields than the ones you use in the distinct, the results may not always be what you want. E.g., suppose you have this data loaded in a table:

 id | field1 | field2
----+--------+--------
  1 | foo    | fooble
  2 | bar    | barble
  3 | foo    | wobble
  4 | woo    | wibble
  5 | foo    | bobble

If you SELECT DISTINCT ON (field1) *, which of the records with foo in field1 is going to be returned: 1, 3 or 5? I believe this doesn't necessarily have to be the same value each time.

---(end of broadcast)---
TIP 4: Have you searched our list archives? http://archives.postgresql.org
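The "which row do you get?" pitfall described above can be demonstrated offline. SQLite's bare-column GROUP BY has the same "one arbitrary row per group" behaviour as PostgreSQL's DISTINCT ON without a tiebreaker, so a quick stdlib sketch (table and values copied from the example in the post) shows both the ambiguity and the fix:

```python
import sqlite3

# PostgreSQL's DISTINCT ON (field1) * keeps one arbitrary row per
# field1 value unless ORDER BY field1, <tiebreaker> pins it down.
# SQLite's bare-column GROUP BY behaves the same way, so it can stand
# in for an offline demo of the pitfall.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER, field1 TEXT, field2 TEXT)")
conn.executemany(
    "INSERT INTO t VALUES (?, ?, ?)",
    [(1, "foo", "fooble"), (2, "bar", "barble"), (3, "foo", "wobble"),
     (4, "woo", "wibble"), (5, "foo", "bobble")],
)

# One row per distinct field1 -- which id comes back for 'foo' is
# unspecified, which is exactly the caveat in the post.
rows = conn.execute(
    "SELECT id, field1, field2 FROM t GROUP BY field1 ORDER BY field1"
).fetchall()
print(rows)  # three rows: bar, foo, woo -- the foo id is arbitrary

# The reliable version states which row you want, the same way adding
# a tiebreaker to DISTINCT ON ... ORDER BY does in PostgreSQL.
first = conn.execute(
    "SELECT MIN(id), field1 FROM t GROUP BY field1 ORDER BY field1"
).fetchall()
print(first)  # → [(2, 'bar'), (1, 'foo'), (4, 'woo')]
```

The second query is deterministic on any engine; the first is not, and that non-determinism is what makes mixing DISTINCT ON columns with other selected columns surprising.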
Re: [GENERAL] [pgadmin-support] pgAdmin III v1.6 Beta 1 Released
Wow! This really looks good. I especially like the subtle but helpful improvements to the Query Window --- that brace-matching really helps! Also the fine change to "finish" on the maintenance-OK button is a very nice solution. As pgAdmin is distributed with PostgreSQL on Win32, maybe a slogan like "PostgreSQL 8.2 - the polished release" really fits.

Harald

--
GHUM Harald Massa
persuadere et programmare
Harald Armin Massa
Reinsburgstraße 202b
70197 Stuttgart
0173/9409607
-
Let's set so double the killer delete select all.
Re: [GENERAL] [pgadmin-support] pgAdmin III v1.6 Beta 1 Released
From: Harald Armin Massa [mailto:[EMAIL PROTECTED]]
Sent: 15 September 2006 09:21
To: Dave Page
Cc: pgadmin-hackers@postgresql.org; pgsql-general@postgresql.org
Subject: Re: [pgadmin-support] pgAdmin III v1.6 Beta 1 Released

> Wow!

:-)

> This really looks good. I especially like the subtle but helpful improvements to the Query Window --- that brace-matching really helps!

Yes, I was quite pleased with that little mod. The regexp-based Find/Replace on there is pretty handy as well.

> Also the fine change to "finish" on the maintenance-OK button is a very nice solution. As pgAdmin is distributed with PostgreSQL on Win32, maybe a slogan like "PostgreSQL 8.2 - the polished release" really fits.

:-)

Thanks for the feedback.

Regards,
Dave.
Re: [GENERAL] OK, this is a
On Wed, Sep 13, 2006 at 03:09:17PM +0200, Michelle Konzack wrote:
> Hi Alvaro,
> This is a second test with some non US-ASCII characters...

FWIW, this appeared fine in Mutt; the raw message has the subject split across multiple lines, but Mutt handles that just fine. My guess is someone's MUA truncates at the newline when hitting reply.

Have a nice day,
--
Martijn van Oosterhout   kleptog@svana.org   http://svana.org/kleptog/
From each according to his ability. To each according to his ability to litigate.
Re: [GENERAL] PostgreSQL slammed by PHP creator
On Fri, 2006-09-15 at 00:21 -0500, Ron Johnson wrote:
> On 09/14/06 21:32, AgentM wrote:
>> On Sep 14, 2006, at 21:25, Bruce Momjian wrote:
>>> I am not going to ask how you got to 135MPH.
>> Obviously he was running MySQL under the hood.
> No wonder he spun out and crashed into a telephone pole... There's that damn missing fsync again.

"Unsafe at any speed." -- Ralph Nader on Hemi SQL without fsync.

Andy
Re: [GENERAL] berkley sockets
On Wed, Sep 13, 2006 at 08:40:24PM -0400, J S B wrote:
> Thanks a lot Tony. Just wondering if the same can be done with C.

Not entirely clear what you're doing, but couldn't you do what you want with LISTEN/NOTIFY? That way the daemon connects to the server too and gets notified when other clients request it...

Have a nice day,
--
Martijn van Oosterhout   kleptog@svana.org   http://svana.org/kleptog/
From each according to his ability. To each according to his ability to litigate.
[GENERAL] ECPG: non-integer constant in group by
I actually have two questions.

1) It seems like the fastest way to find the # of distinct elements in a column is using GROUP BY. With ECPG, if I try

   EXEC SQL SELECT filenm FROM beamdata GROUP BY filenm;

I will get the SQL error "Too few arguments". Why? Can I correct the query to avoid the error message? (sqlca.sqlerrd[2] does contain the # of elements in spite of the error.)

2) The code below was meant to find the # of distinct elements for many columns, but fails with the message "sql error 'non-integer constant in GROUP BY'" thrown at the EXEC SQL EXECUTE statement. What is the problem? I suspect it is the definition char *vars[NVARS], but couldn't find any working alternatives. :-|

--

#define NVARS 24

int main(int argc, char *argv[])
{
  int i, n_occ[NVARS];

  EXEC SQL BEGIN DECLARE SECTION;
  char *vars[NVARS] = { "filenm", "yr", "mo", "dy", "hr", "mt", "sc", "us",
                        "stat_id", "bmnum", "channel", "scnflag", "cp_id",
                        "intt", "intt_us", "frang", "rsep", "tfreq", "noise",
                        "natten", "nave", "nrang", "gsct", "isct" };
  char dbnm[50], *stmt = "SELECT ? FROM beamdata GROUP BY ?;";
  EXEC SQL END DECLARE SECTION;

  EXEC SQL WHENEVER SQLWARNING SQLPRINT;
  EXEC SQL WHENEVER SQLERROR SQLPRINT;

  sprintf(dbnm, "%s", argv[1]);
  EXEC SQL CONNECT TO :dbnm;
  EXEC SQL PREPARE query FROM :stmt;

  for (i = 0; i < NVARS; i++) {
    EXEC SQL EXECUTE query USING :vars[i], :vars[i];
    n_occ[i] = sqlca.sqlerrd[2];
    fprintf(stderr, "# %s: %d\n", vars[i], n_occ[i]);
  }

  EXEC SQL DEALLOCATE PREPARE query;
  EXEC SQL COMMIT;
  EXEC SQL DISCONNECT;
  return 0;
}
[GENERAL] Postgresql 7.4 migration to (partially) new disks
Hi,

Our main postgresql/jboss/lotus notes server is configured as follows.

OS:    Debian GNU/Linux 3.0
PgSQL: 7.4.13

The FS structure of the system is as follows:

Filesystem   1k-blocks     Used  Available  Use%  Mounted on
/dev/sda1      9614116  6528132    2597612   72%  /
/dev/sdf      10321208  5801628    3995292   60%  /raid2
/dev/sdg       6956424  4720060    1882988   72%  /raidlog

where /dev/sda1 is the boot SCSI disk, while /dev/sdf and /dev/sdg reside on two external EMC logical disks connected with qlogic interfaces. PgSQL is installed at the default location /usr/local/pgsql; data is in the home of the postgres user: /var/lib/pgsql/data. My main DB's (dynacom) data are held in the $PGDATA2 location at /raid2/var/lib/postgres-data. Also, the commit log and transaction log directories are linked to:

pg_clog -> /raidlog/sma/var/lib/pgsql/data/pg_clog
pg_xlog -> /raidlog/sma/var/lib/pgsql/data/pg_xlog

We have planned to do the following task this Sunday: migrate from Debian GNU/Linux 3.0 to SUSE SLES 9 (that's just a weird Lotus Notes requirement), and the sysadmins have decided to do that on the same HW and EMC disk arrays, by only replacing the root (/) disk.

The normal (safe) way to do that would be by following the usual backup/install/configure/restore path. However (just as with any upgrade, and with Lotus Notes things get really scary at times), there is always the possibility that the whole upgrade procedure holds until Monday morning, when there would be an order from our boss to roll back to the old system, or maybe to repeat the same procedure every night of the following days until we succeed in the Lotus Notes upgrade! In this scenario, if my new SUSE pgsql installation had been live for some hours in the meantime, I would have to do the whole reverse backup/restore procedure again, and this normally takes quite a while to complete. The DB is about 2.5 GB as a .sql dump and 6 GB on disk. So one thought passing through was to keep the live postgresql dirs without dumps/restores.

That is, to just retain the whole pgsql directory /var/lib/pgsql/data on both systems, by copying back and forth, while leaving the data in $PGDATA2 (/raid2/var/lib/postgres-data) on the same EMC disks. That is, no backup/restore at all. (Provided of course that the /var/lib/pgsql/data owner/group are also set up correctly.) Has anyone done anything similar with long-term success, or does anyone see any potential problems with the latter approach?

Thanks a lot for any thoughts.
--
Achilleas Mantzios
Re: [GENERAL] ECPG: non-integer constant in group by
On 2006-09-15, Poul Jensen [EMAIL PROTECTED] wrote:
> 2) The code below was meant to find the # of distinct elements for many columns, but fails with the message "sql error 'non-integer constant in GROUP BY'" thrown at the EXEC SQL EXECUTE statement. What is the problem?
>
>   char dbnm[50], *stmt = "SELECT ? FROM beamdata GROUP BY ?;";

That statement is not legal - parameters can only be used to substitute _data_, not identifiers or syntax. You would have to interpolate the column name directly into the query (remembering to quote it with "" if needed) rather than use a parameter for it.

--
Andrew, Supernews
http://www.supernews.com - individual and corporate NNTP services
[GENERAL] Installation with sources or with packages
Hello,

I need to install PostgreSQL 8.X.X on my Debian host. But someone tells me I should install from the source files, and someone else tells me I should use the Debian package. Does anybody have some theory or opinion about installing PostgreSQL from source versus from the Debian package? What's the most reliable/best way to install AND UPDATE the RDBMS over its whole life?

Thanks for your help.
Re: [GENERAL] Installation with sources or with packages
On Fri, Sep 15, 2006 at 01:26:07PM +0200, [EMAIL PROTECTED] wrote:
> Hello, I need to install PostgreSQL 8.X.X on my Debian host. But someone tells me I should install from the source files, and someone else tells me I should use the Debian package. Does anybody have some theory or opinion about installing PostgreSQL from source versus from the Debian package?

I pretty much always install the Debian package when possible, because then I know it conforms to the rest of my system: I can install stuff from the postgresql-contrib package, I can compile stuff against the postgresql-dev packages and know it works... Config files in /etc/, for example.

However, something I have done is build my own Debian packages. Grab the latest source, copy the debian dir over and run dpkg-buildpackage. I rarely do this, but it's a good middle way.

Have a nice day,
--
Martijn van Oosterhout   kleptog@svana.org   http://svana.org/kleptog/
From each according to his ability. To each according to his ability to litigate.
Re: [GENERAL] OK, this is a
Martijn van Oosterhout wrote:
> On Wed, Sep 13, 2006 at 03:09:17PM +0200, Michelle Konzack wrote:
>> Hi Alvaro,
>> This is a second test with some non US-ASCII characters...
>
> FWIW, this appeared fine in Mutt; the raw message has the subject split across multiple lines, but Mutt handles that just fine. My guess is someone's MUA truncates at the newline when hitting reply.

Like Martijn's MUA, for example? Your message has a truncated subject, at least the copy I got from the list. (I was not in Cc: so I didn't get the original. Never mind that I installed the formail trick for suppressing duplicates, so I would have gotten only a single copy anyway ...) But see the archives:

http://archives.postgresql.org/pgsql-general/2006-09/msg00561.php

You'll see that your message's subject has been truncated. But see Michelle's:

http://archives.postgresql.org/pgsql-general/2006-09/msg00528.php

It's not truncated. Why the difference? Not sure. I downloaded the archives' mbox, and I see there that Michelle's subject appears as:

Subject: OK, this is a Test =?iso-8859-1?Q?f=FCr_a_?=
 =?iso-8859-1?Q?Subject_containing_one_non_ASCII_character_ans_some_more_?=
 =?iso-8859-1?B?bGlrZSDp4fPo4PL05yBbV0FTOiBSZTogW0dFTkVSQUxdIFRoaXMgaXMg?=
 =?iso-8859-1?Q?a_Reply_to_the_message_=22Re=3A_Majordomo_drops_multi-line?=
 =?iso-8859-1?Q?_Subject=3A=22_and_I_produce_a_Subject_lomger_then_80_char?=
 =?iso-8859-1?Q?acters_and_sinc?= =?iso-8859-1?Q?e?= the OP is using mutt
 like me, whats going on here?]

Your message, on the other hand, has this on the archives:

Subject: Re: OK, this is =?iso-8859-1?Q?a?=

Why didn't it truncate Michelle's but did truncate yours? FYI, I've seen this on my messages (mutt), yours (also mutt), and Bruce's (elm). I find it strange that Michelle's mutt didn't do the same.

Maybe it's not Majordomo at all, but I haven't seen it anywhere else. (For example on lists I am on that are managed with Mailman, in Spanish, carrying lots of non-ASCII chars in subjects, I don't think I've ever seen a truncated subject.)

--
Alvaro Herrera   http://www.CommandPrompt.com/
PostgreSQL Replication, Consulting, Custom Development, 24x7 support
Re: [GENERAL] compressed protocol
Tom Lane wrote:
> zeljko [EMAIL PROTECTED] writes:
>> Are there any plans (or is somebody already working on one) to see a compressed protocol in 8.2?
>
> Why bother? Run your session through an ssh tunnel and ask it to compress.

hm... tunneling works as expected, but there's no compression?! ssh_config contains "Compression yes", sshd_config (server) contains "Compression yes". Any ideas?
Re: [GENERAL] Installation with sources or with packages
I prefer Debian packages wherever possible. That will provide you with the easiest manageability in the long run for maintaining updates. It's hard to beat

  sudo apt-get update
  sudo apt-get upgrade
  sudo apt-get autoclean

as a complete patch script. However, keep in mind that Debian Sarge (stable) is currently at PostgreSQL 7.3. If you're running Sarge, you'll either have to add Etch (testing) repositories or download the PostgreSQL packages from the Etch repositories. Since Etch is nearing release (which could mean anything in the Debian world) I suspect you won't have any major problems even if you do this.

--
Brandon Aiken
CS/IT Systems Engineer

-----Original Message-----
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]] On Behalf Of [EMAIL PROTECTED]
Sent: Friday, September 15, 2006 7:26 AM
To: pgsql-general@postgresql.org
Subject: [GENERAL] Installation with sources or with packages
Re: [GENERAL] PostgreSQL slammed by PHP creator
At 11:27 AM 9/14/2006 -0400, Arturo Perez wrote:
> Hi all, any response to this: http://www.internetnews.com/dev-news/article.php/3631831

The title of the article is funny: "Is PHP The Cure For The 'Broken' Web?" In my opinion PHP is the main reason why so many websites, apps and servers are broken! The other reason is probably MySQL ;).

To paraphrase Rasmus: PHP is completely broken in so many ways. PHP makes wrong things easy and right things hard. Evidence: addslashes, register_globals and magic quotes. More evidence: PHP Nuke, phpBB, PDO vs PEAR DB.

Taking his advice on software is like taking a coprophagist's advice on fine dining.

Link.
Re: [GENERAL] PostgreSQL slammed by PHP creator
Lincoln Yeoh wrote:
> At 11:27 AM 9/14/2006 -0400, Arturo Perez wrote:
>> Hi all, any response to this: http://www.internetnews.com/dev-news/article.php/3631831
>
> PHP makes wrong things easy and right things hard. Evidence: addslashes, register_globals and magic quotes. More evidence: PHP Nuke, phpBB, PDO vs PEAR DB.

More evidence: the PHP codebase is horrible. It gave me nightmares for weeks when I worked on PL/php. I'm really glad I work on Postgres instead.

--
Alvaro Herrera   http://www.CommandPrompt.com/
The PostgreSQL Company - Command Prompt, Inc.
Re: [GENERAL] compressed protocol
On Fri, Sep 15, 2006 at 03:38:01PM +0200, zeljko wrote:
> Tom Lane wrote:
>> Why bother? Run your session through an ssh tunnel and ask it to compress.
>
> hm... tunneling works as expected, but there's no compression?! ssh_config contains "Compression yes", sshd_config (server) contains "Compression yes"

How did you determine that there's no compression? If you run ssh -v, do you see a line like "Enabling compression at level X"?

--
Michael Fuhr
Re: [GENERAL] compressed protocol
zeljko wrote:
> Tom Lane wrote:
>> Why bother? Run your session through an ssh tunnel and ask it to compress.
>
> hm... tunneling works as expected, but there's no compression?! ssh_config contains "Compression yes", sshd_config (server) contains "Compression yes"

Huh, how do you _know_ there's no compression?

--
Alvaro Herrera   http://www.CommandPrompt.com/
The PostgreSQL Company - Command Prompt, Inc.
Re: [GENERAL] OK, this is a Test für
It is definitely not the mailing list software which cuts the Subject lines. My unencoded Subject looks like:

Subject: OK, this is a Test =?iso-8859-1?Q?f=FCr_a_?=
 =?iso-8859-1?Q?Subject_containing_one_non_ASCII_character_ans_some_more_?=
 =?iso-8859-1?B?bGlrZSDp4fPo4PL05yBbV0FTOiBSZTogW0dFTkVSQUxdIFRoaXMgaXMg?=
 =?iso-8859-1?Q?a_Reply_to_the_message_=22Re=3A_Majordomo_drops_multi-line?=
 =?iso-8859-1?Q?_Subject=3A=22_and_I_produce_a_Subject_lomger_then_80_char?=
 =?iso-8859-1?Q?acters_and_sinc?= =?iso-8859-1?Q?e?= the OP is using mutt
 like me, whats going on here?]

And even with mixed encoding types (base64 and quoted-printable) it is correct.

Greetings
Michelle

Am 2006-09-13 15:09:17, schrieb Michelle Konzack:
> Hi Alvaro,
> This is a second test with some non US-ASCII characters... Since I am in France, I have never seen broken Subject lines.
>
> Greetings
> Michelle Konzack
> Systemadministrator
> Tamay Dogan Network
> Debian GNU/Linux Consultant

- END OF REPLIED MESSAGE -

--
Linux-User #280138 with the Linux Counter, http://counter.li.org/
# Debian GNU/Linux Consultant #
Michelle Konzack
Apt. 917, 50, rue de Soultz, 67100 Strasbourg/France
ICQ #328449886   MSM LinuxMichi   0033/6/61925193
IRC #Debian (irc.icq.com)
Re: [GENERAL] berkley sockets
J S B wrote:
> What exactly is this LISTEN/NOTIFY? Is it some function in socket programming or some part of postgres?
> ~Jas

Listen/Notify is a means of letting a connected PostgreSQL client know that an insert or other event has occurred. It won't help if you need to send a command/message to some other TCP/IP daemon.

You set up a rule to use NOTIFY:

  CREATE RULE InsertDetect AS ON INSERT TO notify_test DO NOTIFY recinsert;

Then on the client side (if the client has libpq listen capability) you listen for the incoming notification messages and react to them accordingly.

--
Tony Caduto
AM Software Design
http://www.amsoftwaredesign.com
Home of PG Lightning Admin for Postgresql
Your best bet for Postgresql Administration
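The client-side flow that pairs with the rule above can be sketched without a running server: a daemon sits blocked waiting for notifications while inserts (via the rule) fire NOTIFY. In this stdlib-only simulation a Queue stands in for the libpq connection socket, and the channel name "recinsert" comes from the rule; everything else is illustrative, not the libpq API:

```python
import queue
import threading

# Stand-in for the libpq connection: a real client would issue
# LISTEN recinsert and block on the connection; here a Queue plays
# the role of the notification channel so the control flow can be
# shown without a server.
notifications = queue.Queue()

def daemon(handled, stop):
    # Equivalent of the client loop waiting for NOTIFY messages.
    while True:
        channel = notifications.get()
        if channel is None:        # shutdown sentinel
            stop.set()
            return
        handled.append(channel)    # react to the notification

def insert_row():
    # The ON INSERT rule fires NOTIFY recinsert on the server side;
    # modeled here by posting the channel name.
    notifications.put("recinsert")

handled = []
stop = threading.Event()
t = threading.Thread(target=daemon, args=(handled, stop))
t.start()
insert_row()
insert_row()
notifications.put(None)
t.join()
print(handled)  # → ['recinsert', 'recinsert']
```

The key property this illustrates is the decoupling: the inserting client and the listening daemon never talk to each other directly; the server relays the event, which is why LISTEN/NOTIFY can replace a hand-rolled socket protocol between Postgres clients (but not, as noted above, messaging to an unrelated daemon).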
Re: [GENERAL] Installation with sources or with packages
Fair enough. I've never done it before except with trivial things that had no dependencies, and I just downloaded the packages with wget. Another option would be to use Ubuntu Server. That's kinda Debian de facto, and offers more current packages, IIRC.

--
Brandon Aiken
CS/IT Systems Engineer

-----Original Message-----
From: Weerts, Jan [mailto:[EMAIL PROTECTED]]
Sent: Friday, September 15, 2006 9:49 AM
To: Brandon Aiken; [EMAIL PROTECTED]; pgsql-general@postgresql.org
Subject: RE: [GENERAL] Installation with sources or with packages

> However, keep in mind that Debian Sarge (stable) is currently at PostgreSQL 7.3. If you're running Sarge, you'll either have to add Etch (testing) repositories or download the PostgreSQL packages from the Etch repositories. Since Etch is nearing release (which could mean anything in the Debian world) I suspect you won't have any major problems even if you do this.

I prefer Debian on my servers too, but running a mixed-mode system with packages from stable, testing and possibly unstable will give you major headaches when software depends on different library versions, or even different libraries than those already installed on your machine. Instead, try www.backports.org, which offers a lot of backported packages for stable. Right now their top news is :)

# I'm going to remove postgresql-8.0 from the backports.org
# archive. It was already removed from Debian, and the last
# version of the Debian package which was available is vulnerable
# to CVE-2006-2313 and CVE-2006-2314, hence the backport is
# also affected.
# Please upgrade to the postgresql-8.1 backport.

Regards
Jan
Re: [GENERAL] berkley sockets
On Wed, Sep 13, 2006 at 08:40:24PM -0400, J S B wrote:
> Thanks a lot Tony. Just wondering if the same can be done with C.

If you are more comfortable with C, I don't see why you couldn't do the same thing as that PL/Perl function. It's just a matter of creating a socket and sending a string with a CRLF, and then, if the command you send returns anything, you just go into a blocking read until you get the response back.

Later,
--
Tony Caduto
AM Software Design
http://www.amsoftwaredesign.com
Home of PG Lightning Admin for Postgresql
Your best bet for Postgresql Administration
Re: [GENERAL] compressed protocol
Michael Fuhr wrote:
> How did you determine that there's no compression? If you run ssh -v, do you see a line like "Enabling compression at level X"?

I've connected to the server with (tunnel created):

  ssh -v -L 5400:myserver.ath.cx:5432 [EMAIL PROTECTED]
  password: ***

Now I'm logged in (there are debug1 messages which say that ssh uses zlib compression level 6). Now I'm sending the command "ls -al /usr/bin"; it returns in approx. 3 seconds. Logout. Now ssh to the server WITHOUT compression; the same command returns in 6 seconds. So the conclusion is that COMPRESSION works fine.

But when I try (via the tunnel, explained above)

  psql -p 5400 -h localhost mydatabase

it connects and works fine, but there's no compression. The query returns in approx. 20 seconds, almost the same (maybe 0.5 sec different) as a normal psql connection. My conclusion is that there's no compression of the psql stream. The returned data is varchars and integers.

Any ideas?
Re: [GENERAL] compressed protocol
On Fri, Sep 15, 2006 at 05:37:50PM +0200, zeljko wrote:
> But when I try (via the tunnel, explained above) psql -p 5400 -h localhost mydatabase, it connects and works fine, but there's no compression. The query returns in approx. 20 seconds, almost the same (maybe 0.5 sec different) as a normal psql connection. My conclusion is that there's no compression of the psql stream. The returned data is varchars and integers.

That's a tenuous conclusion; it assumes that the data transfer is what's taking all the time. Query planning and execution and client-side processing must also be taken into account. Using a sniffer to observe the amount of data transferred would be a more appropriate test.

--
Michael Fuhr
Re: [GENERAL] compressed protocol
Michael Fuhr wrote:
> That's a tenuous conclusion; it assumes that the data transfer is

Are you trying to say that the data won't be compressed?

> what's taking all the time. Query planning and execution and client-side processing must also be taken into account. Using a sniffer to observe the amount of data transferred would be a more appropriate test.

From the client connection:

  EXPLAIN SELECT sfi,nvi,jmi FROM articles ORDER BY sif;
                                  QUERY PLAN
  -----------------------------------------------------------------------
   Index Scan using articles_pkey on articles  (cost=0.00..1031.61 rows=12410 width=42)
  (1 row)

So it returns 12410 rows: sfi = INTEGER, nvi = varchar(50), jmi = varchar(3). Yes, I know that my conclusions are tenuous, but what to say when both tests return results in approximately the same amount of time (compressed or not)? Using compression in my nTier setup (kbmMW (ZLIB)), the same query returns in 9 seconds (vs. 20 secs!). (Same client, same server, same query.)

cheers, zac
Re: [GENERAL] compressed protocol
On Fri, Sep 15, 2006 at 09:52:04AM -0600, Michael Fuhr wrote:
> On Fri, Sep 15, 2006 at 05:37:50PM +0200, zeljko wrote:
>> But when I try (via the tunnel, explained above) psql -p 5400 -h localhost mydatabase, it connects and works fine, but there's no compression. The query returns in approx. 20 seconds, almost the same as a normal psql connection.
>
> That's a tenuous conclusion; it assumes that the data transfer is what's taking all the time. Query planning and execution and client-side processing must also be taken into account. Using a sniffer to observe the amount of data transferred would be a more appropriate test.

Also, don't discount the amount of time that compressing and decompressing takes. The ls and psql tests aren't necessarily comparable due to differing amounts and characteristics of data. I just ran some tests between a couple of boxes on a local network, using psql over a tunneled ssh connection as you are. A sniffer showed that a compressed connection transferred 54% of the amount of data of an uncompressed connection, but it took 69% longer to do so. If the network is fast and the boxes are slow, then a compressed connection can be a net loser.

--
Michael Fuhr
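The "how much data" half of the measurement above can be estimated offline with zlib, which is what ssh's compression uses. A quick sketch (the sample rows are invented to resemble the varchar/integer result set being discussed; real row data will compress differently):

```python
import zlib

# Fabricate ~12k repetitive text rows, loosely resembling the
# integer + varchar result set from the thread. Repetitive text
# compresses very well; the point is that small transferred size
# does not by itself mean faster, since CPU cost and the link
# speed decide whether compression is a net win.
rows = b"".join(
    b"%d|article name %d|pcs\n" % (i, i % 50) for i in range(12410)
)

compressed = zlib.compress(rows, 6)  # level 6, ssh's default level
ratio = len(compressed) / len(rows)
print(f"raw={len(rows)} compressed={len(compressed)} ratio={ratio:.2f}")
```

If the ratio is well under 1.0 but the timed runs are identical, that points at the transfer not being the bottleneck (planning, execution, client rendering), which is exactly the sniffer-based check suggested above.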
Re: [GENERAL] compressed protocol
Michael Fuhr wrote:
> Also, don't discount the amount of time that compressing and decompressing takes. The ls and psql tests aren't necessarily comparable due to differing amounts and characteristics of data. I just ran some tests between a couple of boxes on a local network, using psql over a tunneled ssh connection as you are. A sniffer showed that a compressed connection transferred 54% of the amount of data of an uncompressed connection, but it took 69% longer to do so. If the network is fast and the boxes are slow, then a compressed connection can be a net loser.

I'm testing over a DSL line (the test server has 256k upload and 1MB download). Results are based on this connection.

Server: P4 3GHz HT, 1GB RAM (DSL: D 1MB, U 256k)
Client: P4 3GHz HT, 1GB RAM (DSL: D 3MB, U 384k)

The nTier results show real compression (faster by more than a factor of two).
[GENERAL] Migration
Hello,

I heard from a bunch of PHP guys that Postgres is total crap. Can anyone recommend a guide for migrating to MySQL?

Thanks!
[GENERAL] One of our own begins a new life
Hello,

Yeah, this is a cross-post and it is slightly off topic, but IMHO this is important. Tomorrow one of our own, Devrim Gunduz, is becoming a man. He is sucking it up and committing to the cvs repo of project marriage. May the patch reviewers be kind to him!

Congratz Devrim, have a good honeymoon, and we look forward to having you back in a couple of weeks!

Sincerely,
Joshua D. Drake

--
=== The PostgreSQL Company: Command Prompt, Inc. ===
Sales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240
Providing the most comprehensive PostgreSQL solutions since 1997
http://www.commandprompt.com/
Re: [GENERAL] compressed protocol
On Fri, Sep 15, 2006 at 06:34:48PM +0200, zeljko wrote: I'm testing over a DSL line (the test server has 256k upload and 1MB download). Results are based on this connection. Server: PIV 3GHz HT, 1GB RAM (DSL D 1MB U 256k). Client: PIV 3GHz HT, 1GB RAM (DSL D 3MB U 384k). nTier results show real compression (more than twice as fast). What does a sniffer like tcpdump or wireshark/ethereal show? Let's see how much data is being sent over a compressed vs. uncompressed connection and how long the data transfers are taking. -- Michael Fuhr
Re: [GENERAL] Migration
On Fri, 2006-09-15 at 12:33 -0400, Daniel Corbe wrote: Hello, I heard from a bunch of PHP guys that Postgres is total crap. Can anyone recommend a guide for migrating to MySQL? Someone's been reading that Rasmus interview. OK, I don't have a complete guide, but I think the steps go something like this:

1: Turn off critical thought processes and stop thinking for yourself.
2: Denormalize all your data ahead of time.
3: Stop looking at tweakers.net benchmarks.
4: Install MySQL 3.23.xx. After all, you won't need any fancy features. MyISAM for ALL!
5: Run your server on RAID0 on IDE drives with untested memory and CPU.
6: Back up by copying the files out from under the live running MySQL server.
7: Don't test your restores.
8: Buy all your hardware based on the numbers on the packaging. Pay no attention to those people who actually test hardware. They've obviously been bought off.
9: Say goodbye to data coherency...
Re: [GENERAL] Migration
On Fri, Sep 15, 2006 at 12:33:37PM -0400, Daniel Corbe [EMAIL PROTECTED] wrote: I heard from a bunch of PHP guys that Postgres is total crap. Can anyone recommend a guide for migrating to MySQL? Hello slashdot!
Re: [GENERAL] Migration
Daniel, Is anybody paying you to steal the time of all the developers who will rush to answer you? Unless their troll alarm goes off, which is quite unusual for this species (the developer) in the face of such a challenge... Cheers, Csaba. On Fri, 2006-09-15 at 18:33, Daniel Corbe wrote: Hello, I heard from a bunch of PHP guys that Postgres is total crap. Can anyone recommend a guide for migrating to MySQL? Thanks!
Re: [GENERAL] Migration
Sorry, I couldn't resist putting my troll hat on :} On 9/15/06, Bernhard Weisshuhn [EMAIL PROTECTED] wrote: On Fri, Sep 15, 2006 at 12:33:37PM -0400, Daniel Corbe [EMAIL PROTECTED] wrote: I heard from a bunch of PHP guys that Postgres is total crap. Can anyone recommend a guide for migrating to MySQL? Hello slashdot!
Re: [GENERAL] Migration
I think I've already proven (see Scott and Bernhard's replies) that their troll alarms work just fine. Please realize that anyone who took the time to reply to this was already checking their e-mail and being unproductive. Had I not posted this message I may have saved someone, say, 30 seconds. -Daniel On 9/15/06, Csaba Nagy [EMAIL PROTECTED] wrote: Daniel, Is anybody paying you to steal the time of all the developers who will rush to answer you? Unless their troll alarm goes off, which is quite unusual for this species (the developer) in the face of such a challenge... Cheers, Csaba. On Fri, 2006-09-15 at 18:33, Daniel Corbe wrote: Hello, I heard from a bunch of PHP guys that Postgres is total crap. Can anyone recommend a guide for migrating to MySQL? Thanks!
Re: [GENERAL] ECPG: non-integer constant in group by
On Fri, Sep 15, 2006 at 02:40:49AM -0800, Poul Jensen wrote: 1) It seems like the fastest way to find the number of distinct elements in a column is using GROUP BY. With ECPG, if I try EXEC SQL SELECT filenm FROM beamdata GROUP BY filenm; I will get the SQL error 'Too few arguments'. Why? Can I correct the query to avoid the error message? (sqlca.sqlerrd[2] does contain the number of elements in spite of the error.) The problem is that you are selecting a value but you don't say where ecpg should store the result. You're missing the INTO :var part. Joachim -- Joachim Wieland [EMAIL PROTECTED] GPG key available
[GENERAL] [OT] PHP vs Postgresql argument on Slashdot's front page.
Looks like the PHP vs Postgresql argument is on slashdot.org's front page. Just giving everyone a heads up so they can go and defend postgresql.
Re: [GENERAL] compressed protocol
Michael Fuhr wrote: On Fri, Sep 15, 2006 at 06:34:48PM +0200, zeljko wrote: I'm testing over a DSL line (the test server has 256k upload and 1MB download). Results are based on this connection. Server: PIV 3GHz HT, 1GB RAM (DSL D 1MB U 256k). Client: PIV 3GHz HT, 1GB RAM (DSL D 3MB U 384k). nTier results show real compression (more than twice as fast). What does a sniffer like tcpdump or wireshark/ethereal show? Let's see how much data is being sent over a compressed vs. uncompressed connection and how long the data transfers are taking. I've tried sniffing with ethereal, but there's too much data, so here is what ssh with the -v parameter says (amount of incoming data and compression factor). It shows almost no compression on query data.

ls -al /usr/lib (compression enabled):
debug1: compress incoming: raw data 135639, compressed 20931, factor 0.15

Same query as in my previous posts (compression enabled):
debug1: compress incoming: raw data 682162, compressed 680894, factor 1.00

Query which returns an integer field only:
debug1: compress incoming: raw data 193514, compressed 192030, factor 0.99

Query which returns varchar(3) only:
debug1: compress incoming: raw data 178999, compressed 177505, factor 0.99
Re: [GENERAL] compressed protocol
Michael Fuhr wrote: What does a sniffer like tcpdump or wireshark/ethereal show? Let's see how much data is being sent over a compressed vs. uncompressed connection and how long the data transfers are taking. ARGH, IT WORKS! :) postgresql.conf contained ssl=on, so ssl=off = speed of light via the ssh tunnel :) thanks for your time :) zac
Re: [GENERAL] PostgreSQL slammed by PHP creator
On Thursday 14. September 2006 17:27, Arturo Perez wrote: Hi all, Any response to this: http://www.internetnews.com/dev-news/article.php/3631831 I submitted the story to Slashdot: http://it.slashdot.org/article.pl?sid=06/09/15/1412249 In the /. discussion, Rasmus Lerdorf himself claims that he was grossly misquoted by the journalist. -- Leif Biberg Kristensen | Registered Linux User #338009 http://solumslekt.org/ | Cruising with Gentoo/KDE
[GENERAL] basic geographic structure
Hello,

I need help with the data model of a basic geographic structure: Country / State-Region / Sub-Region. Most of the countries have 3 levels, so I want to do this:

Country
  country_id
  ...

Region
  region_id
  country_id

Sub-region
  sub_region_id
  region_id

Client
  client_id
  sub_region_id

My question: some countries do not have a 3rd level (sub-region). What is the best way to manage this when my Client table normally needs to be linked to the 3rd-level data (sub_region_id)?

Thank you, any tips will be appreciated.

Dario
Re: [GENERAL] basic geographic structure
What about a self-referencing table?

Region
  region_id
  country_id
  parent_region_id
  PRIMARY KEY (region_id)
  FOREIGN KEY (parent_region_id) REFERENCES Region (region_id)
  ...

-- Brandon Aiken CS/IT Systems Engineer
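A runnable sketch of the self-referencing idea, using Python's sqlite3 module purely for illustration. All table and client names here are made up, and for brevity the sketch folds countries into the same table as top-level regions (NULL parent) rather than keeping a separate country_id as above. The recursive query shown works in SQLite and in modern PostgreSQL's WITH RECURSIVE; on the 8.x releases current at the time of this thread you would walk the hierarchy in application code instead.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE region (
    region_id        INTEGER PRIMARY KEY,
    name             TEXT NOT NULL,
    parent_region_id INTEGER REFERENCES region (region_id)  -- NULL at the top level
);
CREATE TABLE client (
    client_id INTEGER PRIMARY KEY,
    name      TEXT NOT NULL,
    region_id INTEGER NOT NULL REFERENCES region (region_id)
);
""")

# A three-level country and a two-level country share the same schema.
conn.executemany("INSERT INTO region VALUES (?, ?, ?)", [
    (1, "France",      None),
    (2, "Provence",    1),
    (3, "Luberon",     2),   # three levels deep
    (4, "Monaco",      None),
    (5, "Monte Carlo", 4),   # this country stops at two levels
])

# A client simply points at the deepest region its country happens to have.
conn.executemany("INSERT INTO client VALUES (?, ?, ?)",
                 [(1, "acme", 3), (2, "globex", 5)])

# A recursive query walks from a client's region up to its country.
path = conn.execute("""
    WITH RECURSIVE up(region_id, name, parent_region_id) AS (
        SELECT r.region_id, r.name, r.parent_region_id
        FROM region r JOIN client c ON c.region_id = r.region_id
        WHERE c.name = 'acme'
        UNION ALL
        SELECT r.region_id, r.name, r.parent_region_id
        FROM region r JOIN up ON r.region_id = up.parent_region_id
    )
    SELECT name FROM up
""").fetchall()
print([n for (n,) in path])  # walks Luberon -> Provence -> France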
Re: [GENERAL] [OT] PHP vs Postgresql argument on Slashdot's front page.
Karen Hill [EMAIL PROTECTED] writes: Looks like the PHP vs Postgresql argument is on slashdot.org's front page. Just giving everyone a heads up so they can go and defend postgresql. It looks like a discussion thread that doesn't require that much defending be done... -- output = (cbbrowne @ linuxdatabases.info) http://linuxfinances.info/info/nonrdbms.html Sometimes if you have a cappuccino and then try again it will work OK. - Dr. Brian Reid, 1992 Sometimes one cappucino isn't enough. - Marcus J. Ranum
Re: [GENERAL] compressed protocol
On Fri, Sep 15, 2006 at 07:58:54PM +0200, zeljko wrote: ARGH IT WORKS ! :) postgresql.conf contained ssl=on , so ssl=off = speed of light via ssh tunnel :) So you were doing SSL over an SSH tunnel. That explains why compression didn't seem to be working -- SSH was trying to compress essentially random data (the SSL-encrypted data), and random data doesn't have enough redundancy to compress. -- Michael Fuhr
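Michael's point is easy to reproduce outside ssh: a zlib-style compressor (the family ssh's compression belongs to) gets almost nothing out of random bytes, which is what SSL-encrypted traffic looks like to the tunnel. A minimal illustration in Python, with made-up payloads:

```python
import os
import zlib

# Highly redundant payload, like a query result full of repeated short strings.
redundant = b"foo|bar|baz|" * 10_000

# Stand-in for SSL-encrypted traffic: random bytes of the same length
# (ciphertext is statistically close to random).
encrypted = os.urandom(len(redundant))

def ratio(data: bytes) -> float:
    """Compressed size as a fraction of the original size."""
    return len(zlib.compress(data)) / len(data)

print(f"redundant data compresses to {ratio(redundant):.2f} of original size")
print(f"random ('encrypted') data compresses to {ratio(encrypted):.2f} of original size")
```

The redundant payload shrinks dramatically, while the random one stays essentially the same size, matching the factor 0.15 vs. factor 1.00 figures in the ssh -v logs earlier in this thread.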
Re: [GENERAL] compressed protocol
Michael Fuhr wrote: On Fri, Sep 15, 2006 at 07:58:54PM +0200, zeljko wrote: So you were doing SSL over an SSH tunnel. That explains why compression didn't seem to be working -- SSH was trying to compress essentially random data (the SSL-encrypted data), and random data doesn't have enough redundancy to compress. yep ;)
[GENERAL] create table as problem
I am getting a syntax error at or near 'as' in this method, can anyone help?

create table iclasses (
    classid serial,
    subject text,
    year text,
    groups text,
    teacher text,
    set text
) as
select distinct subject, year, groups, teacher, set
from interimclasses

I need to make a new table with a classid. Any comments are more than welcome.

regards,
garry
Re: [GENERAL] create table as problem
On Fri, Sep 15, 2006 at 09:42:35PM +0100, garry saddington wrote: I am getting a syntax error at or near 'as' in this method, can anyone help? create table iclasses (classid serial, subject text, year text, groups text, teacher text, set text) as select distinct subject, year, groups, teacher, set from interimclasses With CREATE TABLE AS you can specify column names but not column types. If you want to create more columns than the query returns then try an ordinary CREATE TABLE followed by INSERT INTO ... SELECT.

CREATE TABLE iclasses (
    classid serial,
    subject text,
    year    text,
    groups  text,
    teacher text,
    set     text
);

INSERT INTO iclasses (subject, year, groups, teacher, set)
SELECT DISTINCT subject, year, groups, teacher, set
FROM interimclasses;

-- Michael Fuhr
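The two-step pattern can be checked end to end. Here is a small demonstration using Python's sqlite3 module purely for illustration: SQLite's INTEGER PRIMARY KEY stands in for PostgreSQL's serial, the column list is shortened, and the sample rows are invented.

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# Source table with duplicate rows, standing in for interimclasses.
conn.execute("CREATE TABLE interimclasses (subject TEXT, year TEXT, teacher TEXT)")
conn.executemany(
    "INSERT INTO interimclasses VALUES (?, ?, ?)",
    [("maths", "7", "smith"), ("maths", "7", "smith"), ("art", "8", "jones")],
)

# Step 1: a plain CREATE TABLE, so the extra auto-generated id column can be
# declared (SQLite's INTEGER PRIMARY KEY plays the role of PostgreSQL's serial).
conn.execute(
    "CREATE TABLE iclasses ("
    "  classid INTEGER PRIMARY KEY,"
    "  subject TEXT, year TEXT, teacher TEXT)"
)

# Step 2: INSERT INTO ... SELECT DISTINCT fills the remaining columns;
# classid is generated automatically for each inserted row.
conn.execute(
    "INSERT INTO iclasses (subject, year, teacher) "
    "SELECT DISTINCT subject, year, teacher FROM interimclasses"
)

rows = conn.execute("SELECT classid, subject FROM iclasses ORDER BY classid").fetchall()
print(rows)  # two rows survive the DISTINCT, each with its own classid
```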
Re: [GENERAL] ECPG: non-integer constant in group by
Joachim Wieland wrote: On Fri, Sep 15, 2006 at 02:40:49AM -0800, Poul Jensen wrote: 1) It seems like the fastest way to find the number of distinct elements in a column is using GROUP BY. With ECPG, if I try EXEC SQL SELECT filenm FROM beamdata GROUP BY filenm; I will get the SQL error 'Too few arguments'. Why? Can I correct the query to avoid the error message? (sqlca.sqlerrd[2] does contain the number of elements in spite of the error.) The problem is that you are selecting a value but you don't say where ecpg should store the result. You're missing the INTO :var part. Thank you; that makes sense. I'm not sure why it's considered an error though. The documentation says "Any SQL command can be run...", and why not. The other problem I posted is worse and is what's really keeping me stuck. Why are my text strings considered unacceptable non-integer constants? Here are the relevant lines of code:

#define NVARS 24

EXEC SQL BEGIN DECLARE SECTION;
char *vars[NVARS] = { "filenm", "yr", "mo", "dy", "hr", "mt", "sc", "us",
                      "stat_id", "bmnum", "channel", "scnflag", "cp_id",
                      "intt", "intt_us", "frang", "rsep", "tfreq", "noise",
                      "natten", "nave", "nrang", "gsct", "isct" };
char *stmt = "SELECT ? FROM beamdata GROUP BY ?;";
EXEC SQL END DECLARE SECTION;

EXEC SQL PREPARE query FROM :stmt;
for (i = 0; i < NVARS; i++) {
    EXEC SQL EXECUTE query USING :vars[i], :vars[i];
}

Fails at EXECUTE with the error message: sql error 'non-integer constant in GROUP BY'
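The underlying issue here is that prepared-statement placeholders bind values, not identifiers: each ? arrives as the string constant 'filenm', and PostgreSQL then rejects grouping by a non-integer constant. The effect can be illustrated with Python and SQLite (made-up table and data; SQLite, unlike PostgreSQL, tolerates grouping by a constant, so you see one collapsed group instead of an error):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE beamdata (filenm TEXT)")
conn.executemany("INSERT INTO beamdata VALUES (?)", [("a",), ("a",), ("b",)])

# Placeholders bind *values*, not identifiers: both ?s arrive as the string
# constant 'filenm', so this selects a literal and groups by a literal.
bound = conn.execute(
    "SELECT ? FROM beamdata GROUP BY ?", ("filenm", "filenm")
).fetchall()
print(bound)  # [('filenm',)] -- one collapsed group, not one row per file name

# Column names have to be spliced into the SQL text itself.  Only ever do this
# with identifiers taken from a fixed, trusted list, never from user input.
col = "filenm"  # hypothetical whitelist lookup
spliced = conn.execute(f"SELECT {col} FROM beamdata GROUP BY {col}").fetchall()
print(spliced)  # one row per distinct filenm value
```

For the ECPG loop above, the analogous fix would be to build each statement string with the column name substituted in (e.g. with sprintf) before PREPARE, rather than passing the name through USING.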
Re: [GENERAL] PostgreSQL slammed by PHP creator
On Thursday 14 September 2006 12:19, Joshua D. Drake wrote: Arturo Perez wrote: Hi all, Any response to this: http://www.internetnews.com/dev-news/article.php/3631831 Well first, your subject line is very incendiary and not necessary... I expect that kind of thing from slashdot, but it isn't needed here. Rasmus did not slam postgres... he ran some tests, looked at the output, and then explained his conclusions. This seems perfectly reasonable to me, and I think trying to drum up animosity over it is something we should be cracking down on in the postgresql community... all IMHO. That said, in case anyone needs it, here is a benchmark showing postgresql scalability vs mysql. Interestingly enough it isn't really a database benchmark; whether this adds to or detracts from its credibility is up to you... http://tweakers.net/reviews/638/4 From the FA: One performance enhancement that Lerdorf suggested based on code analysis was to use MySQL instead of PostgreSQL for the database. "If you can fit your problem into what MySQL can handle it's very fast," Lerdorf said. "You can gain quite a bit of performance." For the items that MySQL doesn't handle as well as PostgreSQL, Lerdorf noted that some features can be emulated in PHP itself, and you still end up with a net performance boost. Considering the quality piece of software that PHP is, I don't give much weight to his comment. However, he does carry some oomph in certain circles. Perhaps we should prove him wrong? I'd like to see you do that... here are the slides from his recent talk showing why he came up with the statements he made (http://talks.php.net/show/oscon06/1). Please post the info when you get comparable performance running from PostgreSQL... -- Robert Treat Build A Brighter LAMP :: Linux Apache {middleware} PostgreSQL