Re: [GENERAL] Which CMS/Ecommerce/Shopping cart ?
Hi,

Thanks for several of the links you posted. The issue is not that I am looking for consulting companies who will set up and optimize Postgres plus software. There are a million small firms that do M*SQL-plus-any-CMS work, and I am looking to do that kind of work with clients -- but I want to use the best DB out there, which I believe to be Postgres. And I find that hard to do. Clients do not want to engage in full custom s/w development, because they worry about what happens if we go out of business. I am sure many of you out there who have bigger clients have different experiences, but this is the truth for most of the business that we see. And most of the existing community or paid software out there does not play nicely with Postgres. This vicious cycle can only be broken at the level of pre-packaged web software, which ought to work beautifully out of the box with Postgres. There is just no way around it.

What really, really hurts me is this: come Postgres 9.0, you will have the most amazing DB software in the open source community. I (and millions of small-time developers like me) won't be able to leverage that, because our clients will still demand [insert well-known/commercially supported web software], which has no good support for Postgres.

-Sandeep

On Thu, Jul 29, 2010 at 10:54 AM, Joshua D. Drake j...@commandprompt.com wrote:
> On Thu, 2010-07-29 at 07:04 +0200, Ivan Sergio Borgonovo wrote:
> > BTW up to my memory Django suggests postgres. I haven't seen any benchmark of Django with pg vs mysql.
> Django was originally developed for Postgres but really, they are wholly different beasts.

Joshua D. Drake
--
PostgreSQL.org Major Contributor
Command Prompt, Inc: http://www.commandprompt.com/ - 509.416.6579
Consulting, Training, Support, Custom Development, Engineering
http://twitter.com/cmdpromptinc | http://identi.ca/commandprompt

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
Re: [GENERAL] Dynamic data model, locks and performance
Pierre Thibault wrote:
> Hello people of the PostgreSQL world! I am wondering if PostgreSQL would be a great choice for my database needs. I would like to create a db with a dynamic data model. I would like to be able to add tables and columns to existing tables while other queries are running.

It sounds a bit like the (commercial) ChronicDB add-on that was on the announce list a while ago, http://chronicdb.com/

regards,
Yeb Havinga
Re: [GENERAL] Comparison of Oracle and PostgreSQL full text search
On 28/07/10 02:58, Howard Rogers wrote:
> For what it's worth, I wrote up the performance comparison here: http://diznix.com/dizwell/archives/153

Thanks very much Howard. It might be my schoolboy-physics ability to fit a curve to two data points, but does anyone else think that the second and third graphs look like a sinusoidal variation overlaid on a steadily increasing baseline?

--
Richard Huxton
Archonet Ltd
Re: [GENERAL] Which CMS/Ecommerce/Shopping cart ?
On Wed, 28 Jul 2010 22:24:07 -0700 Joshua D. Drake j...@commandprompt.com wrote:
> On Thu, 2010-07-29 at 07:04 +0200, Ivan Sergio Borgonovo wrote:
> > BTW up to my memory Django suggests postgres. I haven't seen any benchmark of Django with pg vs mysql.
> Django was originally developed for Postgres but really, they are wholly different beasts.

You're right. It would be nice to see a benchmark of any CMS developed with Django on PostgreSQL and MySQL. I tried to find a benchmark of Plone on postgres vs mysql.

I'd tend to think (and I may be wrong) that as a rule of thumb, everything else being equal, MySQL is more suited to commodity CMSs just because it is easier to find coupled with PHP in hosting (and this reflects on communities etc...). Still, it would be nice to also put to rest the myth that MySQL is better for CMSs because they are read-mostly apps.

But then... there are no popular [anything but php] CMSs, yet there are a lot of [anything but php] web frameworks. You start with a pre-packaged web application that looks like a framework, then you start to write custom code, then you start to have more impedance mismatch... The more you have to code, the more you will prefer a framework and Postgres... but if you've coded enough it means you can afford to build your own web application on a framework and have your own box (no hosting).

BTW, which one of the examples you posted uses Ubercart? I'd be curious how many concurrent operations on the basket http://www.commandprompt.com/portal/ sees...

--
Ivan Sergio Borgonovo
http://www.webthatworks.it
Re: [GENERAL] Inheritance and trigger/FK propagation
Karsten Hilbert karsten.hilb...@gmx.net wrote in message news:20100728182051.gj2...@hermes.hilbert.loc...
> On Tue, Jul 27, 2010 at 10:33:19AM +0200, Davor J. wrote:
> > Well... I found it out the hard way :). There are some extra caveats I have come across. There is the very clumsy ALTER TABLE table_name INHERIT (parent_table), which simply presupposes the parent's columns but doesn't enforce them thereafter? So you can remove an inherited column from the child table when inheritance is added after the child table's creation.
> > Anyhow, I thought it could be quite usable for developing a row-level security system. For example, one could have a table rls_security (rls_owner name, rls_select name, rls_delete name, rls_update name) and a simple trigger:
>
> While, as you found out, the trigger won't auto-propagate, this approach is still useful!
> - let all tables inherit from a base table providing the rls fields
> - write a generic trigger that accesses the rls fields *only* (the table oid of the child table is available in the parent table row, fortunately, which will help make error messages better)
> - use an external script (or even a plpgsql function) to attach said generic trigger to each table
> - the script does not need to know the list of relevant tables because that can be derived from the schema metadata inside PostgreSQL (they are children of the parent table ;-)

For completeness, I think this link (http://projects.nocternity.net/index.py/en/psql-inheritance) provides some of the scripts you mention. I haven't tested them, but I think they are a great starting point.

> While a bit more cumbersome than (on-demand) trigger propagation it is still a fairly clean and close-to-the-ideal solution.
Now if Postgres supported firing triggers on CREATE TABLE (so these scripts could fire auto-magically), then it would have been even closer-to-the-ideal :)

Karsten
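For what it's worth, a sketch of the attach-the-generic-trigger function described above, driven by pg_inherits. The parent table (rls_base) and trigger function (rls_check) are hypothetical names, assumed to already exist:

```sql
-- Attach a generic row-level-security trigger to every child of a parent
-- table. Names rls_base / rls_check / rls_check_trg are made up for
-- illustration. Re-running will error out on tables that already have the
-- trigger, so a real script would DROP TRIGGER IF EXISTS first.
CREATE OR REPLACE FUNCTION attach_rls_triggers() RETURNS void AS $$
DECLARE
    child regclass;
BEGIN
    FOR child IN
        SELECT inhrelid::regclass
        FROM pg_inherits
        WHERE inhparent = 'rls_base'::regclass
    LOOP
        EXECUTE 'CREATE TRIGGER rls_check_trg '
             || 'BEFORE INSERT OR UPDATE OR DELETE ON ' || child::text
             || ' FOR EACH ROW EXECUTE PROCEDURE rls_check()';
    END LOOP;
END;
$$ LANGUAGE plpgsql;
```

Since pg_inherits is consulted at call time, newly created child tables are picked up simply by running the function again.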
Re: [GENERAL] Comparison of Oracle and PostgreSQL full text search
Greg Williamson wrote:
> Our tests -- very much oriented at PostGIS -- found Oracle to be between 5 and 15% _faster_, depending on the specifics of the task. We decided to go with postgres given the price difference (several hundred thousand dollars for Oracle in the configuration we needed vs. zip for postgres -- we already had trained postgres DBAs).

Interesting. Do you have information about the versions of PostGIS/GEOS that you were using? Of course we'd be very interested to see examples of test cases with bad performance on the postgis-users list so that we can improve them.

ATB,
Mark.

--
Mark Cave-Ayland - Senior Technical Architect
PostgreSQL - PostGIS
Sirius Corporation plc - control through freedom
http://www.siriusit.co.uk t: +44 870 608 0063
Sirius Labs: http://www.siriusit.co.uk/labs
[GENERAL] PostgreSQL and distributed transactions
Hello everyone,

I saw a question about PostgreSQL and distributed transactions in the mail archives, but it looked a bit old. I am hoping things have changed, hence this mail. We have a database for 'Admin' which will be one PostgreSQL server. We have different servers for our 'Clients'. We designed it that way because clients and related data may run into huge volumes. As far as application use by clients is concerned, the request will be redirected to any one server (where the particular client's data resides) and we are OK. But the initial setup for the client is done by 'Admin', and in that work-flow we need distributed transactions. The transaction will start from the 'Admin' server, do some inserts on the 'Client' server and then either rollback or commit on both servers. Is it possible to do this with PostgreSQL? Any helpful links?

Thank you,
Jayadevan

DISCLAIMER: The information in this e-mail and any attachment is intended only for the person to whom it is addressed and may contain confidential and/or privileged material. If you have received this e-mail in error, kindly contact the sender and destroy all copies of the original communication. IBS makes no warranty, express or implied, nor guarantees the accuracy, adequacy or completeness of the information contained in this email or any attachment and is not liable for any errors, defects, omissions, viruses or for resultant loss or damage, if any, direct or indirect.
Re: [GENERAL] Inheritance and trigger/FK propagation
On Thu, Jul 29, 2010 at 10:50:02AM +0200, Davor J. wrote:
> For completeness, I think this link (http://projects.nocternity.net/index.py/en/psql-inheritance) provides some scripts you mention.

Very interesting.

Karsten
--
GPG key ID E4071346 @ wwwkeys.pgp.net
E167 67FD A291 2BEA 73BD 4537 78B9 A9F9 E407 1346
[GENERAL] PGDay.EU Logo required!
Hi,

We need some help to either design a new logo, or spruce up the existing logo for PGDay.EU 2010, the European PostgreSQL conference. It needs to be related to PostgreSQL in some way of course, and needs to be in colours that will work when converted to black and white or when printed onto a T-shirt. Otherwise, we're open to pretty much any idea - though of course, the final decision on what will be used will be down to the conference team.

Does anyone want to give it a go? I've attached last year's logo in SVG (doesn't display well in some browsers, but does work in Inkscape) and PNG formats, updated with this year's info as a starting point. Please let me know if you're going to work on something.

Thanks!

--
Dave Page
EnterpriseDB UK: http://www.enterprisedb.com
The Enterprise Postgres Company

attachment: pgday.2010.draft2.png
attachment: pgday.2010.draft2.svg
Re: [GENERAL] PostgreSQL and distributed transactions
Jayadevan M jayadevan.maym...@ibsplc.com writes:
> But the initial setup for the client is done by 'Admin' and in that work-flow, we need distributed transactions. The transaction will start from the 'Admin' server, do some inserts on the 'Client' server and then either rollback or commit on both the servers. Is it possible to do this with PostgreSQL? Any helpful links?

See two-phase commit, but you will need a transaction manager I guess: http://www.postgresql.org/docs/current/static/sql-prepare-transaction.html

Regards,
--
dim
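For reference, a minimal sketch of the two-phase commit flow the linked page documents. The table and transaction identifier here are made up for illustration:

```sql
-- On the 'Client' server, as part of the distributed setup work-flow.
-- Requires max_prepared_transactions > 0 in postgresql.conf.
BEGIN;
INSERT INTO client_config (client_name) VALUES ('acme');  -- hypothetical table
PREPARE TRANSACTION 'setup-acme-42';  -- transaction is now stored on disk,
                                      -- detached from this session, crash-safe

-- Later, once the 'Admin' server's half of the work has succeeded:
COMMIT PREPARED 'setup-acme-42';

-- Or, if the 'Admin' side failed:
-- ROLLBACK PREPARED 'setup-acme-42';
```

The piece PostgreSQL does not provide is the coordinator that decides, for both servers, whether to issue COMMIT PREPARED or ROLLBACK PREPARED; that is the external transaction manager Dimitri mentions.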
Re: [GENERAL] Need help with full text index configuration
Brian, you have two options:

1. Use your own parser (just modify the default).
2. Use the replace function, like:

postgres=# select to_tsvector( replace('qw/er/ty','/',' ') );
     to_tsvector
----------------------
 'er':2 'qw':1 'ty':3
(1 row)

Oleg

On Wed, 28 Jul 2010, Brian Hirt wrote:

> I have some data that can be searched, and it looks like the parser is making some assumptions about the data that aren't true in our case, and I'm trying to figure out how to exclude a token type. I haven't been able to find the answer to my question so far, so I thought I would ask here.
>
> The data I have are English words, and sometimes there are words separated by a / without spaces. The parser finds these things and tokenizes them as files. I'm sure in some situations that's the right assumption, but based on my data, I know there will never be a file name in the column. For example, instead of the parser recognizing three asciiword tokens it recognizes one asciiword and one file. I'd like a way to have the / just get parsed as blank.
>
> db=# select * from ts_debug('english','maybe five/six');
>    alias   |    description    |  token   |  dictionaries  |  dictionary  |  lexemes
> -----------+-------------------+----------+----------------+--------------+------------
>  asciiword | Word, all ASCII   | maybe    | {english_stem} | english_stem | {mayb}
>  blank     | Space symbols     |          | {}             |              |
>  file      | File or path name | five/six | {simple}       | simple       | {five/six}
> (3 rows)
>
> I thought that maybe I could create a new configuration and drop the file mapping, but that doesn't seem to work either.
>
> db=# CREATE TEXT SEARCH CONFIGURATION public.testd ( COPY = pg_catalog.english );
> CREATE TEXT SEARCH CONFIGURATION
> db=# ALTER TEXT SEARCH CONFIGURATION testd DROP MAPPING FOR file;
> ALTER TEXT SEARCH CONFIGURATION
> db=# SELECT * FROM ts_debug('testd','mabye five/six');
>    alias   |    description    |  token   |  dictionaries  |  dictionary  |  lexemes
> -----------+-------------------+----------+----------------+--------------+---------
>  asciiword | Word, all ASCII   | mabye    | {english_stem} | english_stem | {maby}
>  blank     | Space symbols     |          | {}             |              |
>  file      | File or path name | five/six | {}             |              |
> (3 rows)
>
> Is there any way to do this?
> Thanks for the help in advance. I'm running 8.4.4

Regards,
Oleg
_
Oleg Bartunov, Research Scientist, Head of AstroNet (www.astronet.ru),
Sternberg Astronomical Institute, Moscow University, Russia
Internet: o...@sai.msu.su, http://www.sai.msu.su/~megera/
phone: +007(495)939-16-83, +007(495)939-23-83
Re: [GENERAL] Which CMS/Ecommerce/Shopping cart ?
On 7/28/2010 3:06 PM Sandeep Srinivasa wrote:
> yup I did. The reason why I wanted examples was to amply demonstrate, to clients, that postgresql is viable. It is kinda weird if the only examples I have are restricted to the postgresql _community_ websites themselves.

Both xTuple web sites (www.xtuple.com and www.xtuple.org) run on Drupal/Postgres, and our xChange app store runs on Ubercart. BTW, we've also integrated Drupal with the CRM and sales functionality in xTuple - www.xtuple.com/webportal. We've had zero issues with Postgres support in Drupal.

Cheers,
Ned

> This may sound irrelevant, but please do understand the huge opposition to having anything to do with PG in the whole CMS/e-store community. In fact I even saw a request to eliminate postgresql support in Drupal 7 (that was taken care of by the valiant efforts of the PG community): http://drupal.org/node/337146
>
> Plus, it would have been interesting to know which versions of Drupal, Ubercart, etc. were being used for such deployments. Again, it is relevant because of certain (older) benchmarks which show significantly worse performance because of the suboptimal way that Drupal integrates with PostgreSQL: http://mikkel.hoegh.org/blog/2008/oct/13/drupal-database-performance-mysql-postgresql/
>
> There has been _nothing_ to disprove the above numbers ever since - please correct me if I am wrong. What does a person making a case for Postgres do in this situation?
>
> thanks
>
> On Wed, Jul 28, 2010 at 10:40 PM, Joshua D. Drake j...@commandprompt.com wrote:
> > On Wed, 2010-07-28 at 22:37 +0530, Sandeep Srinivasa wrote:
> > > Could you point me to any deployments of Drupal + Ubercart + Postgres?
> > Did you not see the links below? Drupal + Ubercart + a ton of their modules work great.
> > It is what drives:
> > http://www.postgresqlconference.org/
> > http://www.postgresql.us
> > http://www.fossexperts.com/
> > http://www.commandprompt.com/portal
> >
> > --
> > PostgreSQL - XMPP: jdrake(at)jabber(dot)postgresql(dot)org
> > Consulting, Development, Support, Training
> > 503-667-4564 - http://www.commandprompt.com/
> > The PostgreSQL Company, serving since 1997
Re: [GENERAL] Comparison of Oracle and PostgreSQL full text search
On 28 July 2010 02:58, Howard Rogers h...@diznix.com wrote:
> For what it's worth, I wrote up the performance comparison here: http://diznix.com/dizwell/archives/153

Thanks, very interesting results. I wonder, are the results being sorted by the database? The performance degradation for large numbers of results might be explained by it switching over from an internal to an external sort, in which case tweaking work_mem might make a difference. Of course this is pure speculation without the EXPLAIN ANALYSE output.

Regards,
Dean
[GENERAL] How to get geometry enabled Tables form Postgresql/postgis
Dear All,

How can I retrieve only the spatially enabled tables from the database (PostgreSQL/PostGIS)? Please let me know. I am waiting for your great response.

Thanks and Regards,
Venkat
Re: [GENERAL] Danger of idiomatic plpgsql loop for merging data
On Wed, Jul 28, 2010 at 5:27 PM, J. Greg Davidson j...@well.com wrote:
> Hi fellow PostgreSQL hackers,
>
> I just got burned by the idiomatic loop documented in the PostgreSQL manual as Example 39-2, "Exceptions with UPDATE/INSERT". I have now replaced this standard idiom with a safer one described below.
>
> What went wrong: It seems that the table I was either inserting into or selecting from had a trigger inserting some associated data which was sometimes raising a unique_violation exception, turning the standard idiom into an infinite loop!
>
> My (simplified) old code looked like this:
>
> CREATE TABLE foos (
>     foo_ foo PRIMARY KEY DEFAULT next_foo(),
>     name_ text UNIQUE NOT NULL
> );
>
> CREATE OR REPLACE FUNCTION get_foo(text) RETURNS foo AS $$
> DECLARE
>     _foo foo;
> BEGIN
>     LOOP
>         SELECT foo_ INTO _foo FROM foos WHERE name_ = $1;
>         IF FOUND THEN
>             RETURN _foo;
>         END IF;
>         BEGIN
>             INSERT INTO foos(name_) VALUES ($1);
>         EXCEPTION WHEN unique_violation THEN
>             -- maybe another thread?
>         END;
>     END LOOP;
> END;
> $$ LANGUAGE plpgsql STRICT;
>
> My (simplified) new code is longer but more flexible, safer and adds logging:
>
> CREATE OR REPLACE FUNCTION old_foo(text) RETURNS foo AS $$
>     SELECT foo_ FROM foos WHERE name_ = $1
> $$ LANGUAGE SQL STRICT;
>
> CREATE OR REPLACE FUNCTION new_foo(text) RETURNS foo AS $$
> DECLARE
>     this regprocedure := 'new_foo(text)';
>     _foo foo;
> BEGIN
>     INSERT INTO foos(name_) VALUES ($1) RETURNING foo_ INTO _foo;
>     RETURN _foo;
> EXCEPTION WHEN unique_violation THEN -- maybe another thread?
>     RAISE NOTICE '% % unique_violation', this, $1;
>     RETURN NULL;
> END;
> $$ LANGUAGE plpgsql STRICT;
>
> CREATE OR REPLACE FUNCTION get_foo(text) RETURNS foo AS $$
>     SELECT COALESCE( old_foo($1), new_foo($1), old_foo($1) )
> $$ LANGUAGE sql STRICT;

hm. It's theoretically possible, although highly unlikely, in read committed for this to return null. The loop is the *only* way to guarantee insert/update without retrying in the application. I do however like the use of coalesce as a sugary SQL way of doing 'if else'.
Also nice use of regprocedure in the logging. However, the original problem of the exception handler catching the wrong unique violation is a bug, ISTM, in your exception handling -- that should have been caught in the cascaded handler and handled/rethrown in such a way as to not loop you.

Couple of style notes (my $0.02). You've obviously put a lot of thought into your style, but I found your example a bit hard to follow at first, so I'll present an alternative:

*) Highly prefer named input args in pl/pgsql (don't use $1, etc).
*) I prefer to suffix composite types with _t -- your example would be easier to read:

create type foo_t as (...);

CREATE TABLE foos (
    foo foo_t PRIMARY KEY DEFAULT next_foo(),
    name_ text UNIQUE NOT NULL
);

*) Don't like the suffix underscore (what does it mean?).
*) I personally reserve plurals for arrays, not tables. Most people don't, though, so no big deal:

create table foo (
    foo foo_t,   -- you can do this
    foos foo_t[]
);

merlin
Re: [GENERAL] Dynamic data model, locks and performance
On 29/07/10 22:36, Pierre Thibault wrote:
> Why so? This is something expected of a database used in a continuous-integration environment. Maybe I did not express myself very well. Users are not changing their models all the time. They create new models, which means creating new tables, and from time to time they add new fields to existing tables, just like developers do when they update a schema for new application functionality.

Ah, OK. I read your initial mail as implying much more frequent changes, especially combined with millions of tables.

> In my last job, I was working with continuous database integration. We created DDL scripts to add new fields and tables live on a SQL Server database in production. Most scripts were executed during the night to reduce the impact on the db. In practice, this may mean that a running query will have to wait maybe half a second to get a lock because of such an update. Usually not really more than that. Can I expect similar performance with PostgreSQL?

With a few caveats, yes. The main one: for columns you want to be NOT NULL, you should add new columns as nullable. Then UPDATE the new column to hold any desired default, before issuing an ALTER TABLE ... ALTER COLUMN ... SET NOT NULL.

That's because an ALTER TABLE to add a nullable column doesn't have to rewrite the table, while an ALTER TABLE to add a NOT NULL column with a default has to immediately rewrite the table to add the default to every record. This is slow, and during this operation ALTER TABLE holds an exclusive lock. By contrast, if you ALTER TABLE to add a nullable column (brief exclusive lock), UPDATE (a longer but much lower-order lock that doesn't conflict with SELECT, INSERT, or UPDATE of unaffected rows) and then finally ALTER TABLE again to add the constraint (a further brief lock), you have greatly reduced lock times.

> > Really, I'd be researching dynamic schema databases, object stores, key/value set stores, etc.
> > Such things are outside my personal experience, though, and I can't really advise you much on technologies. Beware of the latest No-SQL fad platforms, though; you might end up being better off with something older and more stable even if it's less fashionable than CouchDB or whatever is in fashion today.
>
> Maybe, but, as I said, using a SQL database with the one-table-per-class-hierarchy strategy seems to be the way to go for me. I'll take a look at these options too.

Just beware of huge table counts. People have reported issues on the list with truly huge numbers of tables. It's not something that turns up in most regular relational designs, and there are a few things in Pg (like, AFAIK, autovacuum's scanning of tables to vacuum) that scale linearly with table counts. I'm sure it's workable, it just might not be ideal.

--
Craig Ringer
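Craig's three-step recipe, spelled out as DDL (the table and column names here are invented for the example):

```sql
-- 1. Add the column as nullable: only a brief exclusive lock, no rewrite.
ALTER TABLE accounts ADD COLUMN status text;

-- 2. Backfill using ordinary row-level locks; this can be done in batches
--    so concurrent SELECT/INSERT/UPDATE on other rows proceed normally.
UPDATE accounts SET status = 'active' WHERE status IS NULL;

-- 3. Another brief lock to add the default and the constraint. SET NOT NULL
--    scans the table to verify the data, but does not rewrite it.
ALTER TABLE accounts ALTER COLUMN status SET DEFAULT 'active';
ALTER TABLE accounts ALTER COLUMN status SET NOT NULL;
```

Compare the single-statement form (`ALTER TABLE accounts ADD COLUMN status text NOT NULL DEFAULT 'active'`), which does the same work but rewrites the whole table under one long exclusive lock.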
Re: [GENERAL] Which CMS/Ecommerce/Shopping cart ?
On Thu, 2010-07-29 at 11:57 +0530, Sandeep Srinivasa wrote:
> What really, really hurts me is this - come Postgres 9.0 you will have the most amazing DB software in the open source community. I (and millions of small time developers like me) wont be able to leverage that - because our clients will still demand [insert well known/commercially supported web software], which have no good support for postgres.

That is certainly a valid concern with Drupal. However I think you are possibly looking at this the wrong way. If you look at Rails, Django, TurboGears, Catalyst, Groovy+Grails, they all have excellent PostgreSQL support. What I find is that many PHP people that build software are still very much MySQL folks, and yes, that is unfortunate.

I would note that all your concerns are resolved in Drupal 7. The real question is when they will manage to get that out the door.

Sincerely,
Joshua D. Drake

--
PostgreSQL.org Major Contributor
Command Prompt, Inc: http://www.commandprompt.com/ - 509.416.6579
Consulting, Training, Support, Custom Development, Engineering
http://twitter.com/cmdpromptinc | http://identi.ca/commandprompt
Re: [GENERAL] Which CMS/Ecommerce/Shopping cart ?
On Thu, 2010-07-29 at 08:10 -0400, Ned Lilly wrote:
> On 7/28/2010 3:06 PM Sandeep Srinivasa wrote:
> > [snip]
> Both xTuple web sites (www.xtuple.com and www.xtuple.org) run on Drupal/Postgres, and our xChange app store runs on ubercart. BTW, we've also integrated Drupal with the CRM and sales functionality in xTuple - www.xtuple.com/webportal. We've had zero issues with Postgres support in Drupal.

The issue isn't Drupal. It is modules. There are a lot of popular modules that do not work with PostgreSQL (Lightbox, for example). The Google Checkout module for Ubercart didn't work either until relatively recently.

Sincerely,
Joshua D. Drake

--
PostgreSQL.org Major Contributor
Command Prompt, Inc: http://www.commandprompt.com/ - 509.416.6579
Consulting, Training, Support, Custom Development, Engineering
http://twitter.com/cmdpromptinc | http://identi.ca/commandprompt
Re: [GENERAL] How Big is Too Big for Tables?
P Kishor wrote:
> On Wed, Jul 28, 2010 at 1:38 PM, Stephen Frost sfr...@snowman.net wrote:
> > * P Kishor (punk.k...@gmail.com) wrote:
> > > Three. At least, in my case, the overhead is too much. My data are single bytes, but the smallest data type in Pg is smallint (2 bytes). That, plus the per-row overhead, adds up to a fair amount of overhead.
> > My first reaction to this would be - have you considered aggregating the data before putting it into the database, in such a way that you put more than 1 byte of data on each row? That could possibly reduce the number of rows you have by quite a bit and also reduce the impact of the per-tuple overhead in PG.
> each row is half a dozen single byte values, so, it is actually 6 bytes per row (six columns).

Hmm, six chars - this would not perchance be bio (sequence) or geospatial data? If so then there are specialist lists out there that can help. Also quite a few people use Pg for this data and there are some very neat Pg add-ons.

Jacqui
Re: [GENERAL] Danger of idiomatic plpgsql loop for merging data
On Thu, Jul 29, 2010 at 10:06 AM, Merlin Moncure mmonc...@gmail.com wrote:
> On Wed, Jul 28, 2010 at 5:27 PM, J. Greg Davidson j...@well.com wrote:
> > [snip: original post and code, quoted in full earlier in this thread]
>
> hm. It's theoretically possible, although highly unlikely, in read committed for this to return null. The loop is the *only* way to guarantee insert/update without retrying in the application.
> I do however like the use of coalesce as a sugary SQL way of doing 'if else'. Also nice use of regprocedure in the logging. However the original problem of the exception handler catching the wrong unique violation is a bug, ISTM, in your exception handling -- that should have been caught in the cascaded handler and handled/rethrown in such a way as to not loop you.

Probably better would be to have the unique constraint handler grep sqlerrm to ensure that the violation is coming from the table being merged (otherwise simply rethrow). That's a lot safer/faster than using a sub-transaction in every dependent trigger. Also, ISTM any infinite loop like that could use a safety mechanism for paranoia purposes. If you _do_ rethrow, make sure to use the version of RAISE that supports throwing the specific SQL error code.

The fact that this is so difficult to get right is really an indictment of the SQL language, TBH (it was only introduced in 2008). This is a perfectly reasonable thing to want to do, but a proper implementation is quite nasty. This is a highly requested feature and our workaround is (let's be frank) a kludge.

merlin
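A sketch of what that grep-sqlerrm variant of new_foo() might look like. The constraint name foos_name__key is a guess at what Postgres would generate for the name_ UNIQUE column; check pg_constraint rather than trusting it:

```sql
-- Swallow a unique violation only if it comes from foos' own unique
-- constraint; a violation raised by some trigger's table is rethrown.
-- Assumes the foos table and foo type from earlier in the thread;
-- the constraint name below is an assumption, not verified.
CREATE OR REPLACE FUNCTION new_foo(_name text) RETURNS foo AS $$
DECLARE
    _foo foo;
BEGIN
    INSERT INTO foos(name_) VALUES (_name) RETURNING foo_ INTO _foo;
    RETURN _foo;
EXCEPTION WHEN unique_violation THEN
    IF sqlerrm LIKE '%foos_name__key%' THEN
        RAISE NOTICE 'new_foo: % already exists', _name;  -- expected race
        RETURN NULL;
    ELSE
        RAISE;  -- rethrow: the violation came from somewhere else
    END IF;
END;
$$ LANGUAGE plpgsql STRICT;
```

The bare RAISE re-throws the original error (available since 8.4), so a cascaded unique violation from a trigger still surfaces to the caller instead of silently looping.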
Re: [GENERAL] Which CMS/Ecommerce/Shopping cart ?
On Thu, 29 Jul 2010 08:52:46 -0700 Joshua D. Drake j...@commandprompt.com wrote:
> The issue isn't Drupal. It is modules. There are a lot of popular modules that do not work with PostgreSQL (Lightbox for example). The google checkout module for Ubercart didn't work either until relatively recently.

I'd say the opposite, but I'll wait to test D7 more. Core takes ages to agree on what should be done to fix bugs for Postgres without affecting even the feelings of MySQL developers. Modules may have more problems, but fixing them is generally trivial and upstream is generally quick to integrate the fix. The problem with core is maintaining your patches until and unless they fix the bug.

I agree that PHP and MySQL are a perverse match. Still, if he plans to deploy stuff as commodity software they are a necessary evil. The problem arises when you're in between custom and RAD. Anyway, more Python/Django-based CMSs are flourishing... and Django's originally supported DB was Postgres... http://www.django-cms.org/ [1]

The Onion's migration from Drupal/MySQL to Django/PostgreSQL is emblematic.

[1] I think I could make a quick benchmark, if possible, on postgresql and mysql

--
Ivan Sergio Borgonovo
http://www.webthatworks.it
Re: [GENERAL] Which CMS/Ecommerce/Shopping cart ?
This touches on a question I would love to be able to answer Why is MySQL so much more popular right now, especially in the OpenSource community? As a database I find its architecture with multiple underlying engines and other quirks to be rather dubious. Then there is the issue of commercial licenses and exactly when you must have those and what it will really cost. Yet it is pretty ubiquitous. How come? Why isn't postgresql more on developer's minds when they think of OS databases? Amazon cloud has great scalable MySQL support but apparently not postgreql. Why? Is there something about postgresql that is bugging all these people or what? - samantha On Jul 29, 2010, at 10:16 AM, Ivan Sergio Borgonovo wrote: On Thu, 29 Jul 2010 08:52:46 -0700 Joshua D. Drake j...@commandprompt.com wrote: The issue isn't Drupal. It is modules. There are a lot of popular modules that do not work with PostgreSQL (Lightbox for example). The google checkout module for Ubercart didn't work either until relatively recently. I'd say the opposite but I'll wait to test more D7. Core takes ages to agree on what should be done to fix bugs for Postgres without affecting even the feelings of MySQL developers. Modules may have more problems but fixing them is generally trivial and generally upstream is quick to integrate the fix. The problem for core is maintaining your patches till and if they fix the bug. I agree that PHP and MySQL are a perverse match. Still if he plans to deploy stuff as commodity software they are a necessary evil. The problem arise when you're in-between custom and RAD. Anyway more python/django based cms are flourishing... and given Django originally supported DB was Postgres... http://www.django-cms.org/ [1] Migration of Onion from Drupal/Mysql - Django/Postgresql is emblematic. 
[1] I think I could make a quick benchmark, if possible, on postgresql and mysql -- Ivan Sergio Borgonovo http://www.webthatworks.it -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
Re: [GENERAL] Which CMS/Ecommerce/Shopping cart ?
On Jul 29, 2010, at 10:53 AM, Samantha Atkins wrote: This touches on a question I would love to be able to answer Why is MySQL so much more popular right now, especially in the OpenSource community? As a database I find its architecture with multiple underlying engines and other quirks to be rather dubious. Then there is the issue of commercial licenses and exactly when you must have those and what it will really cost. Yet it is pretty ubiquitous. How come? Why isn't postgresql more on developer's minds when they think of OS databases? Amazon cloud has great scalable MySQL support but apparently not postgreql. Why? Is there something about postgresql that is bugging all these people or what? MySQL is the PHP database. Low rent shared hosting targets primarily people hosting PHP apps. Most PHP apps target MySQL. So hosting companies offer PHP and MySQL. Most PHP apps are aimed to run on low end shared hosting, where the only database available is likely to be MySQL, so PHP apps target MySQL. Another issue is that it's apparently easier to deploy multi-tenant MySQL than PostgreSQL. And yet another is the... let's just say sloppy development style of most PHP coders who've learned to use MySQL, rather than SQL. That maps better onto the sloppy MySQL approach to data integrity than the PostgreSQL one. '0000-00-00' isn't a date and '' isn't an integer - unless you're a PHP coder using MySQL. These bugs in the apps, and similar ones related to MySQL's rather special approach to SQL, also make it painful to add PostgreSQL support to an existing app that was developed solely to target MySQL. Cheers, Steve -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
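Steve's zero-date point is easy to demonstrate (a sketch; assumes a table t with a date column d, and MySQL's historical non-strict default sql_mode -- recent MySQL versions can be configured to reject this too):

```sql
-- MySQL, non-strict sql_mode: the impossible date is silently
-- coerced to the pseudo-value '0000-00-00' (with only a warning).
INSERT INTO t (d) VALUES ('2010-02-31');

-- PostgreSQL: the same statement fails outright:
-- ERROR:  date/time field value out of range: "2010-02-31"
INSERT INTO t (d) VALUES ('2010-02-31');
```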
Re: [GENERAL] Which CMS/Ecommerce/Shopping cart ?
Samantha Atkins sjatk...@mac.com writes: Why is MySQL so much more popular right now, especially in the OpenSource community? I think it's strictly historical. The mysql bias you see in so many web tools was established in the late 90s, a time when mysql worked reasonably well (at least according to the mysql developers' notion of reasonably well) whereas postgres was still pretty slow and buggy. It took us a long time to get from the original academically-oriented code to something of real production quality. We're definitely competitive now, but I don't know if we'll ever fully overcome that historical disadvantage. regards, tom lane -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
Re: [GENERAL] Which CMS/Ecommerce/Shopping cart ?
On Thu, 2010-07-29 at 14:57 -0400, Tom Lane wrote: Samantha Atkins sjatk...@mac.com writes: Why is MySQL so much more popular right now, especially in the OpenSource community? I think it's strictly historical. The mysql bias you see in so many web tools was established in the late 90s, a time when mysql worked reasonably well (at least according to the mysql developers' notion of reasonably well) whereas postgres was still pretty slow and buggy. It took us a long time to get from the original academically-oriented code to something of real production quality. We're definitely competitive now, but I don't know if we'll ever fully overcome that historical disadvantage. Yeah that is a tough one. I would note though, if you move out of PHP, our community grows astronomically. Sincerely, Joshua D. Drake regards, tom lane -- PostgreSQL.org Major Contributor Command Prompt, Inc: http://www.commandprompt.com/ - 509.416.6579 Consulting, Training, Support, Custom Development, Engineering http://twitter.com/cmdpromptinc | http://identi.ca/commandprompt -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
Re: [GENERAL] Which CMS/Ecommerce/Shopping cart ?
On Thu, 29 Jul 2010 14:57:04 -0400 Tom Lane t...@sss.pgh.pa.us wrote: Samantha Atkins sjatk...@mac.com writes: Why is MySQL so much more popular right now, especially in the OpenSource community? I think it's strictly historical. The mysql bias you see in so many web tools was established in the late 90s, a time when mysql worked reasonably well (at least according to the mysql developers' notion of reasonably well) whereas postgres was still pretty slow and buggy. It took us a long time to get from the original academically-oriented code to something of real production quality. We're definitely competitive now, but I don't know if we'll ever fully overcome that historical disadvantage. How popular is Visual Basic right now? And even if it was more popular than C#... what kind of application would you expect to find that starts development right now in VB? -- Ivan Sergio Borgonovo http://www.webthatworks.it -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
Re: [GENERAL] Which CMS/Ecommerce/Shopping cart ?
On Thu, 2010-07-29 at 21:19 +0200, Ivan Sergio Borgonovo wrote: On Thu, 29 Jul 2010 14:57:04 -0400 Tom Lane t...@sss.pgh.pa.us wrote: Samantha Atkins sjatk...@mac.com writes: Why is MySQL so much more popular right now, especially in the OpenSource community? I think it's strictly historical. The mysql bias you see in so many web tools was established in the late 90s, a time when mysql worked reasonably well (at least according to the mysql developers' notion of reasonably well) whereas postgres was still pretty slow and buggy. It took us a long time to get from the original academically-oriented code to something of real production quality. We're definitely competitive now, but I don't know if we'll ever fully overcome that historical disadvantage. How popular is Visual Basic right now? And even if it was more popular than C#... what kind of application would you expect to find that start development right now in VB? http://www.tiobe.com/index.php/content/paperinfo/tpci/index.html Sincerely, Joshua D. Drake -- PostgreSQL.org Major Contributor Command Prompt, Inc: http://www.commandprompt.com/ - 509.416.6579 Consulting, Training, Support, Custom Development, Engineering http://twitter.com/cmdpromptinc | http://identi.ca/commandprompt -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
Re: [GENERAL] Which CMS/Ecommerce/Shopping cart ?
On 10-07-29 02:57 PM, Tom Lane wrote: Samantha Atkinssjatk...@mac.com writes: Why is MySQL so much more popular right now, especially in the OpenSource community? I think it's strictly historical. The mysql bias you see in so many web tools was established in the late 90s, a time when mysql worked reasonably well (at least according to the mysql developers' notion of reasonably well) whereas postgres was still pretty slow and buggy. It took us a long time to get from the original academically-oriented code to something of real production quality. We're definitely competitive now, but I don't know if we'll ever fully overcome that historical disadvantage. regards, tom lane Postgres also had a reputation of being slow compared to MySQL. This was due to a lot of really poor MySQL vs Postgres benchmarks floating around in the early 2000's. They generally tested stock configurations (MySQL had a less restrictive out of the box configuration) and they tended to test things like how fast can a single client insert/update/delete data from a table. Unsurprisingly, MySQL won, as Postgres imposed all sorts of pesky behind the scenes protection for your data that MySQL didn't worry about. No one really tested it in a way that mattered, which was how the two databases performed under concurrent load, where Postgres won hands down. -- Brad Nicholson 416-673-4106 Database Administrator, Afilias Canada Corp. -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
Re: [GENERAL] Which CMS/Ecommerce/Shopping cart ?
On Thu, Jul 29, 2010 at 10:46 PM, Ivan Sergio Borgonovo m...@webthatworks.it wrote: On Thu, 29 Jul 2010 08:52:46 -0700 Joshua D. Drake j...@commandprompt.com wrote: The issue isn't Drupal. It is modules. There are a lot of popular modules that do not work with PostgreSQL (Lightbox for example). The google checkout module for Ubercart didn't work either until relatively recently. I'd say the opposite but I'll wait to test more D7. Core takes ages to agree on what should be done to fix bugs for Postgres without affecting even the feelings of MySQL developers. Modules may have more problems but fixing them is generally trivial and generally upstream is quick to integrate the fix. The problem for core is maintaining your patches till and if they fix the bug. Both of you are right - as I said, it is a vicious circle. For a very long time now (as evidenced by subsequent mails on this thread), a lot of PG enthusiasts have tried to understand the reason for MySQL's popularity. It isn't that developers are lazy - nor that they are conditioned in the MySQL way. The plain and simple truth is that there are some hugely popular pieces of software out there (that incidentally are written in PHP) which have no equivalent. Biggest example - Wordpress. Strictly mysql only. If I want to throw together a company blog + mailing list + SEO, I can get it done using Wordpress in a matter of hours. Same for shopping carts, though there is no single canonical software that I can name - all of them are strictly or strongly MySQL-only. These are restrictions that cannot (and should not) always be answered with a roll-your-own solution. A very interesting example is Bugzilla - just search for bugzilla install on Google and see how many of the results are MySQL centric. This, when Postgres is officially supported. I searched and asked on IRC and forums - I got no answer to what works with postgres. I google for stuff - I get no answers.
Yet, I do end up with lots of results that claim to show how bad postgres is compared to mysql, in the context of all the popular pieces of software out there. So what am I driving at? If you have the time to spare or have the inclination, please, please put out blog posts, how-tos, and guides which relate to use cases like putting up a blog, setting up an online shop, configuring a popular CRM package, etc. rather than pure DB-centric information. Please put information out there on how to configure Drupal, SugarCRM, Trac, and the like. Yes, I know to RTFM - but then I see the buzz around MySQL and am swayed. regards Sandeep
Re: [GENERAL] Which CMS/Ecommerce/Shopping cart ?
On 07/29/10 12:52 PM, Sandeep Srinivasa wrote: Biggest example - Wordpress. Strictly mysql only. If I want to throw together a company blog + mailing list + SEO, I can get it done using Wordpress in a matter of hours. Serendipity - http://www.s9y.org - a full featured blog server, in php, that works great with Postgres. BSD licensed, and IMHO, much cleaner internally than Wordpress. MUCH cleaner plugin architecture. -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
[GENERAL] PostgreSQL keepalives help
Hi, I have a system working with a client-server structure and PostgreSQL 8.4. My problem is that if a client who is editing a record loses his connection to the server, the TCP/IP connection is still considered alive! So the record stays allocated to the client in my database. I need the records to be free for editing within a few minutes. Therefore I set the keepalive configuration in my postgresql.conf:

tcp_keepalives_idle = 60      # TCP_KEEPIDLE, in seconds
tcp_keepalives_interval = 60  # TCP_KEEPINTVL, in seconds
tcp_keepalives_count = 5      # TCP_KEEPCNT

After making these settings and restarting the server, the system continues to behave the same way: it only breaks the TCP/IP connection after two hours and then deallocates the records in PostgreSQL. I need an efficient and safe way for PostgreSQL to understand that it should break these connections as configured! I need help, urgent! Thanks, Kananda S Silveira - Desenvolvedora .NET / Analista de Banco de Dados STEP SOFTWARE Tecnologia e Projetos LTDA kana...@stepsoftware.com.br / +55(48)8417-8121
Re: [GENERAL] Which CMS/Ecommerce/Shopping cart ?
Shopping carts, company blogs, etc. Popular pieces of software. As common denominators go, that's pretty low. Perhaps what is needed is a dumbed down version of Postgres. My hobby horse. MySQL supports regular expressions... In a [rhymes with rat's ass]. It supports a kind of tinker toy reduced set of regular expressions. But I guess nobody's complaining. I just hope that Postgres keeps having enough support so that it can continue to be developed. Until Apple adopted Unix, it was basically dead in the water. Even then, it was probably iPod's that really kept it alive. But it's still alive. I don't know. John On Jul 29, 2010, at 9:52 PM, Sandeep Srinivasa wrote: If I want to throw together a company blog + mailing list + SEO, I can get it done using Wordpress in a matter of hours. Same for shopping carts, though there is no single canonical software that I can name - all of them are strictly or strongly MySQL only. These are restrictions that cannot (and should not) how bad postgres is as compared to mysql, in context of all the popular pieces of software out there. -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
[GENERAL] How to improve: performance of query on postgresql 8.3 takes days
Dear postgresql list, I have some trouble generating data for an analysis task at hand. I have a table (table A) containing 5 million records and 28 attributes. This table is 461MB if I copy it to a csv file. I want to create another table (table B) based on the contents of table A plus some 15 extra attributes (functions written in pl/pgsql produce those extra attributes). So my statement looks like this: create tableB as ( select some attributes, function1(A.attribute1) as attributeX+1, function2(A.attribute1,A.Attribute2,A.attribute3,A.attribute4,A.attribute5) as attributeX+2 .. function15(A.attribute1,A.attribute9) as attributeX+15 from tableA as A) This takes almost 60 hours to finish on my database server running debian 5.0 with XFS as filesystem and 4GB RAM. I'm using postgresql server version 8.3 (but am seeing the same phenomenon on my FreeBSD 8.0 database server running postgresql 8.4 as well). I arrived at 15 functions because I had 7 or 8 joins in the past and saw that my disk was getting hit, and I had heard someplace that RAM is faster, so I rewrote those 7 or 8 joins as functions in pl/pgsql. They were just simple lookups, although some of the functions are looking stuff up in tables containing 78000 records. However, I thought this wouldn't be a problem because they are simple functions which look up the value of one variable based on a parameter.
3 of the more special functions are shown here:

CREATE OR REPLACE FUNCTION agenttype1(a character) RETURNS integer AS
$BODY$
DECLARE
  i integer;
  t1_rij canxagents%ROWTYPE;
BEGIN
  select * into t1_rij from canxagents where agent = a;
  if NOT FOUND THEN
    i := 0;
  ELSE
    if t1_rij.aantal >= 0 and t1_rij.aantal <= 499 THEN
      i := 1;
    ELSE
      if t1_rij.aantal >= 500 and t1_rij.aantal <= 1999 THEN
        i := 2;
      ELSE
        if t1_rij.aantal >= 2000 THEN
          i := 3;
        END IF;
      END IF;
    END IF;
  END IF;
  return i;
END;
$BODY$ LANGUAGE 'plpgsql' VOLATILE COST 100;

CREATE OR REPLACE FUNCTION agenttype2(a character) RETURNS integer AS
$BODY$
DECLARE
  i integer;
  t1_rij showagents%ROWTYPE;
BEGIN
  select * into t1_rij from showagents where agent = a;
  if NOT FOUND THEN
    i := 0;
  ELSE
    if t1_rij.aantal >= 0 and t1_rij.aantal <= 499 THEN
      i := 1;
    ELSE
      if t1_rij.aantal >= 500 and t1_rij.aantal <= 999 THEN
        i := 2;
      ELSE
        if t1_rij.aantal >= 1000 THEN
          i := 3;
        END IF;
      END IF;
    END IF;
  END IF;
  return i;
END;
$BODY$ LANGUAGE 'plpgsql' VOLATILE COST 100;

CREATE OR REPLACE FUNCTION agenttype3(a character) RETURNS integer AS
$BODY$
DECLARE
  i integer;
  t1_rij noagents%ROWTYPE;
BEGIN
  select * into t1_rij from noagents where agent = a;
  if NOT FOUND THEN
    i := 0;
  ELSE
    if t1_rij.aantal >= 0 and t1_rij.aantal <= 299 THEN
      i := 1;
    ELSE
      if t1_rij.aantal >= 300 and t1_rij.aantal <= 899 THEN
        i := 2;
      ELSE
        if t1_rij.aantal >= 900 THEN
          i := 3;
        END IF;
      END IF;
    END IF;
  END IF;
  return i;
END;
$BODY$ LANGUAGE 'plpgsql' VOLATILE COST 100;

The interesting parts of my postgresql.conf file look like this:

#------------------------------------------------------------------------------
# RESOURCE USAGE (except WAL)
#------------------------------------------------------------------------------

# - Memory -

shared_buffers = 512MB          # min 128kB or max_connections*16kB
                                # (change requires restart)
temp_buffers = 8MB              # min 800kB
#max_prepared_transactions = 5  # can be 0 or more
                                # (change requires restart)
# Note: Increasing max_prepared_transactions costs ~600 bytes of shared memory
# per transaction slot, plus lock space (see max_locks_per_transaction).
work_mem = 50MB                 # min 64kB
maintenance_work_mem = 256MB    # min 1MB
#max_stack_depth = 2MB          # min 100kB

# - Free Space Map -

max_fsm_pages = 153600          # min max_fsm_relations*16, 6 bytes each
                                # (change requires restart)
#max_fsm_relations = 1000       # min 100, ~70 bytes each
                                # (change requires restart)

# - Kernel Resource Usage -

#max_files_per_process = 1000   # min 25
                                # (change requires restart)
#shared_preload_libraries = ''  # (change requires restart)

# - Cost-Based Vacuum Delay -

#vacuum_cost_delay = 0          # 0-1000 milliseconds
#vacuum_cost_page_hit = 1       # 0-10000 credits
#vacuum_cost_page_miss = 10     # 0-10000 credits
#vacuum_cost_page_dirty = 20
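One thing worth noting about the lookup functions above: they are declared VOLATILE, the pessimistic default, which tells the planner nothing about their behavior; a read-only lookup like this can be declared STABLE, and the nested IFs collapse into a single CASE. A hedged sketch of agenttype1 rewritten this way (same boundaries as the posted version; not part of the original mail):

```sql
-- STABLE SQL function with the same thresholds as agenttype1.
-- COALESCE maps "no matching row" to 0, as the original does
-- (a negative aantal, which the original leaves NULL, also maps to 0).
CREATE OR REPLACE FUNCTION agenttype1(a character) RETURNS integer AS $$
    SELECT COALESCE(
        (SELECT CASE
                    WHEN aantal BETWEEN 0 AND 499    THEN 1
                    WHEN aantal BETWEEN 500 AND 1999 THEN 2
                    WHEN aantal >= 2000              THEN 3
                END
         FROM canxagents
         WHERE agent = a),
        0);
$$ LANGUAGE sql STABLE;
```

Combined with an index on canxagents(agent), each call becomes a cheap index probe.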
Re: [GENERAL] How to improve: performance of query on postgresql 8.3 takes days
On 07/29/10 2:58 PM, Dino Vliet wrote: Dear postgresql list, I have some troubles generating data for a analysis task at hand. I have a table (table A) containing 5 million records and 28 number of attributes. This table is 461MB big if I copy it to a csv file. I want to create another table (table B) based on the contents of table A plus some 15 extra attributes (in pl/pgsql written functions which produce those extra attributes) So my statement looks like this: create tableB as ( select some attributes, function1(A.attribute1)as attributeX+1, function2(A.attribute1,A.Attribute2,A.attribute3,A.attribute4,A.attribute5) as attribute X+2..function15(A.attribute1,A.attribute9) as attributeX+15 from tableA as A) This takes almost 60 hours to finish on my database server running debian 5.0 with XFS as filesystem containing 4GB RAM. I'm using postgresql server version 8.3 (but am seeing the same phenomena on my FreeBSD 8.0 database server running postgresql 8.4 as well) I arrived at 15 functions because I had 7 or 8 joins in the past and saw that my disk was getting hid and I had heard someplace that RAM is faster so I rewrote those 7 or 8 joins as functions in pl/pgsql. They were just simple lookups, although some of the functions are looking stuff up in tables containing 78000 records. However, I thought this wouldn't be a problem because they are simple functions which look up the value of one variable based on a parameter. 3 of the more special functions are shown here: ... 1. What can I do to let the creation of table B go faster? 2. Do you think the use of indices (but where) would help me? I didn't go that route because in fact I don't have a where clause in the create table B statement. I could put indices on the little tables I'm using in the functions. 3. What about the functions? Should I code them differently? 4. What about my server configuration. What could be done over there? 
Thanks in advance. Certainly your lookup tables should have an index on the key you're using to look up values. Without said index, that 78000-row 'little' table will have to be sequentially scanned for every one of your several million rows. With said indexes, you may find that just doing JOINs when you actually use this data, rather than creating a new table, will work quite nicely. You could use a VIEW to do the joins transparently on the fly. -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
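In concrete terms, the advice above might look like this (a sketch; the lookup-table and column names come from the functions posted earlier, while the tableA/tableB column names are placeholders):

```sql
-- Index each lookup table on its probe key, so the per-row lookups
-- become index scans instead of sequential scans:
CREATE INDEX canxagents_agent_idx ON canxagents (agent);
CREATE INDEX showagents_agent_idx ON showagents (agent);
CREATE INDEX noagents_agent_idx   ON noagents (agent);

-- Alternatively, skip materializing table B entirely and compute the
-- derived attributes on the fly; a view keeps the joins transparent:
CREATE VIEW tableB AS
SELECT a.*,
       agenttype1(a.attribute1) AS agenttype1,
       agenttype2(a.attribute1) AS agenttype2,
       agenttype3(a.attribute1) AS agenttype3
FROM tableA AS a;
```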
Re: [GENERAL] Comparison of Oracle and PostgreSQL full text search
Greg Williamson wrote: Our tests -- very much oriented at PostGIS -- found Oracle to be between 5 and 15% _faster_, depending on the specifics of the task. We decided to go with postgres given the price difference (several hundred thousand dollars for Oracle in the configuration we needed vs. zip for postgres -- we already had trained postgres DBAs). Can always throw the licensing savings toward larger hardware too; $100K buys a pretty big server nowadays. At the FAA's talk about their internal deployment of PostgreSQL: https://www.postgresqlconference.org/2010/east/talks/faa_airports_gis_and_postgresql they were reporting that some of their difficult queries were dramatically faster on PostgreSQL; I vaguely recall one of them ran at 100X the speed it did under Oracle Spatial. It was crazy. As always, this sort of thing is very workload dependent. There are certainly queries (such as some of the ones from the TPC-H that big DB vendors optimize for) that can be 100X faster on Oracle too. -- Greg Smith 2ndQuadrant US Baltimore, MD PostgreSQL Training, Services and Support g...@2ndquadrant.com www.2ndQuadrant.us -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
Re: [GENERAL] Which CMS/Ecommerce/Shopping cart ?
Joshua D. Drake wrote: If you look at Rails, Django, Turbo Gears, Catalyst, Groovy+Grails they all have excellent PostgreSQL support. Exactly. If Ivan were building on a Rails or Java software platform, this discussion of why is PostgreSQL not well supported? wouldn't be happening. The Python stuff is still lagging behind those too a bit I think, but not by much. The PHP/PostgreSQL disconnect is really a mystery to me given that's the only combination that's even got a whole book written on it. -- Greg Smith 2ndQuadrant US Baltimore, MD PostgreSQL Training, Services and Support g...@2ndquadrant.com www.2ndQuadrant.us -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
Re: [GENERAL] Which CMS/Ecommerce/Shopping cart ?
Samantha Atkins wrote: Amazon cloud has great scalable MySQL support but apparently not postgreql. Why? The perception is that MySQL has good built-in replication usable for scaling up purposes, and therefore is suitable for cloud deployments. Whereas the perception is that PostgreSQL has no such thing. There's enough truth in both positions to keep both perceptions alive, even though neither is completely correct. MySQL replication has some serious limitations, and there are PostgreSQL replication solutions, just none that ship with the software yet. The pending 9.0 with obvious built-in replication is already shifting how people are treating PostgreSQL for cloud deployments, a trend which should just accelerate once the release comes out. -- Greg Smith 2ndQuadrant US Baltimore, MD PostgreSQL Training, Services and Support g...@2ndquadrant.com www.2ndQuadrant.us -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
Re: [GENERAL] Comparison of Oracle and PostgreSQL full text search
On Thu, Jul 29, 2010 at 5:42 PM, Greg Smith g...@2ndquadrant.com wrote: Greg Williamson wrote: Our tests -- very much oriented at postGIS found Oracle to be between 5 and 15% _faster_ depending on the specifics of the task. We decided to go with postgres given the price difference (several hundred thousand dollars for Oracle in the configuration we needed vs. zip for postgres -- we already had trained postgres DBAs). Can always throw the licensing savings toward larger hardware too; $100K buys a pretty big server nowadays. Hear hear! You can get a quad x 12 core (48 cores total) server with 128G ram and 32 15k6 hard drives for well under $25k nowadays. For $50k or so you can throw 100 hard drives at the problem. -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
Re: [GENERAL] Comparison of Oracle and PostgreSQL full text search
On Thu, Jul 29, 2010 at 10:33 PM, Dean Rasheed dean.a.rash...@gmail.com wrote: On 28 July 2010 02:58, Howard Rogers h...@diznix.com wrote: For what it's worth, I wrote up the performance comparison here: http://diznix.com/dizwell/archives/153 Thanks, very interesting results. I wonder, are the results being sorted by the database? The performance degradation for large numbers of results might be explained by it switching over from an internal to an external sort, in which case tweaking work_mem might make a difference. Of course this is pure speculation without the EXPLAIN ANALYSE output. Regards, Dean

Yes, the results were being sorted. I did various tests, changing work_mem, shared_buffers and much else, one by one, until I arrived at the combination of settings that gave me the best 'total search time' results. Personally, I couldn't see any difference in the explain plans, but I was in a bit of a hurry and I may have missed it. For the search term 'woman', which matches 1,590,275 documents, here's the explain plan:

Sort  (cost=185372.88..185372.93 rows=20 width=312) (actual time=10537.152..10537.154 rows=20 loops=1)
  Sort Key: a.rf, a.sort_id
  Sort Method: quicksort  Memory: 48kB
  ->  Result  (cost=109119.55..185372.45 rows=20 width=312) (actual time=4309.020..10537.116 rows=20 loops=1)
    ->  Append  (cost=109119.55..185372.45 rows=20 width=312) (actual time=4309.018..10537.108 rows=20 loops=1)
      ->  Subquery Scan a  (cost=109119.55..109119.68 rows=10 width=312) (actual time=4309.018..4309.026 rows=10 loops=1)
        ->  Limit  (cost=109119.55..109119.58 rows=10 width=641) (actual time=4309.016..4309.019 rows=10 loops=1)
          ->  Sort  (cost=109119.55..109121.94 rows=957 width=641) (actual time=4309.014..4309.015 rows=10 loops=1)
            Sort Key: search_rm.sort_id
            Sort Method: top-N heapsort  Memory: 35kB
            ->  Bitmap Heap Scan on search_rm  (cost=6651.07..109098.87 rows=957 width=641) (actual time=272.851..4021.458 rows=583275 loops=1)
              Recheck Cond: (to_tsvector('english'::regconfig, (textsearch)::text) @@ '''woman'''::tsquery)
              Filter: ((bitand(sales_method_code, 1) > 0) AND (bitand(subsiter, 1) > 0) AND (bitand(filetype, 1) > 0))
              ->  Bitmap Index Scan on rmsearch_idx  (cost=0.00..6650.83 rows=25826 width=0) (actual time=165.711..165.711 rows=586235 loops=1)
                Index Cond: (to_tsvector('english'::regconfig, (textsearch)::text) @@ '''woman'''::tsquery)
      ->  Subquery Scan b  (cost=76252.65..76252.77 rows=10 width=312) (actual time=6228.073..6228.080 rows=10 loops=1)
        ->  Limit  (cost=76252.65..76252.67 rows=10 width=727) (actual time=6228.072..6228.075 rows=10 loops=1)
          ->  Sort  (cost=76252.65..76254.29 rows=655 width=727) (actual time=6228.071..6228.072 rows=10 loops=1)
            Sort Key: search_rf.sort_id
            Sort Method: top-N heapsort  Memory: 38kB
            ->  Bitmap Heap Scan on search_rf  (cost=5175.18..76238.49 rows=655 width=727) (actual time=363.684..5748.279 rows=1007000 loops=1)
              Recheck Cond: (to_tsvector('english'::regconfig, (textsearch)::text) @@ '''woman'''::tsquery)
              Filter: ((bitand(sales_method_code, 1) > 0) AND (bitand(subsiter, 1) > 0) AND (bitand(filetype, 1) > 0))
              ->  Bitmap Index Scan on rfsearch_idx  (cost=0.00..5175.02 rows=17694 width=0) (actual time=242.859..242.859 rows=1030282 loops=1)
                Index Cond: (to_tsvector('english'::regconfig, (textsearch)::text) @@ '''woman'''::tsquery)
Total runtime: 10538.832 ms

And here's the plan for the search term clover, which matches only 2,808 records in total:

  ->  Result  (cost=109119.55..185372.45 rows=20 width=312) (actual time=16.807..23.990 rows=20 loops=1)
    ->  Append  (cost=109119.55..185372.45 rows=20 width=312) (actual time=16.806..23.985 rows=20 loops=1)
      ->  Subquery Scan a  (cost=109119.55..109119.68 rows=10 width=312) (actual time=16.806..16.812 rows=10 loops=1)
        ->  Limit  (cost=109119.55..109119.58 rows=10 width=641) (actual time=16.805..16.807 rows=10 loops=1)
          ->  Sort  (cost=109119.55..109121.94 rows=957 width=641) (actual time=16.804..16.805 rows=10 loops=1)
            Sort Key: search_rm.sort_id
            Sort Method: top-N heapsort  Memory: 35kB
            ->  Bitmap Heap Scan on search_rm  (cost=6651.07..109098.87 rows=957 width=641) (actual time=1.054..15.577 rows=1807 loops=1)
Re: [GENERAL] Which CMS/Ecommerce/Shopping cart ?
Brad Nicholson wrote: Postgres also had a reputation of being slow compared to MySQL. This was due to a lot of really poor MySQL vs Postgres benchmarks floating around in the early 2000's. I think more of those were fair than you're giving them credit for. For many common loads, up until PG 8.1 came out--November 2005--MySQL really was faster. That was the release with the killer read scalability improvements, then 8.3 piled on again with all the write-heavy stuff too. MySQL 4 vs. PG 8.0? MySQL won that fair and square sometimes. -- Greg Smith 2ndQuadrant US Baltimore, MD PostgreSQL Training, Services and Support g...@2ndquadrant.com www.2ndQuadrant.us -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
Re: [GENERAL] Which CMS/Ecommerce/Shopping cart ?
Samantha Atkins wrote: Why is MySQL so much more popular right now, especially in the OpenSource community? As a database I find its architecture with multiple underlying engines and other quirks to be rather dubious. Then there is the issue of commercial licenses and exactly when you must have those and what it will really cost. Yet it is pretty MySQL was available on Windows long before PostgreSQL. MySQL has always been free for all uses, including commercial use, for ISPs, so it quickly became the database that all ISPs/domain hosts provide and, therefore, a popular choice for Web apps. ubiquitous. How come? Why isn't postgresql more on developer's minds when they think of OS databases? Amazon cloud has great scalable MySQL support but apparently not postgreql. Why? Is there something about postgresql that is bugging all these people or what? My guess is that there are a lot of people who know of MySQL who have never heard of PostgreSQL. Also, PostgreSQL does not scale as well on Windows as it does on Linux/Unix. I have talked to people who support 2,000 concurrent users using PostgreSQL on Linux. I have been told that the limit on Windows is about 300 concurrent users. I have no idea how accurate that statement is. I share your surprise because PostgreSQL has a much more extensive feature set than MySQL. -- .Bill. -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
Re: [GENERAL] Which CMS/Ecommerce/Shopping cart ?
Why is MySQL so much more popular right now, especially in the OpenSource community? This is not true in Japan. PostgreSQL and MySQL have had roughly even shares in many surveys. -- Tatsuo Ishii SRA OSS, Inc. Japan English: http://www.sraoss.co.jp/index_en.php Japanese: http://www.sraoss.co.jp -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
Re: [GENERAL] Which CMS/Ecommerce/Shopping cart ?
On Fri, Jul 30, 2010 at 3:05 AM, John Gage jsmg...@numericable.fr wrote: Shopping carts, company blogs, etc. Popular pieces of software. As common denominators go, that's pretty low. Perhaps what is needed is a dumbed down version of Postgres. I don't think that is what is required - as I mentioned above, developers who work on php+mysql don't do it because they are dumb. A LOT of clients specifically ask for it (similarly for J2EE) because there is a huge amount of talent available. Ruby on Rails is getting there. But the MySQL camp has been innovating - Percona, XtraDB, MariaDB - all of them are going beyond the legacy MySQL architecture, yet retaining language compatibility. All of them get the ecosystem for free. What Postgres needs is a dumbed-down ecosystem, rather than a dumbed-down DB. As you mentioned above, these are low common denominators, but that denominator makes up 90% of usage in the real world. They need help from you guys to make choices which include Postgres. Hang out in the popular software forums and help us in using Postgres + whatever, because there is simply too much FUD about PG and too much love for every other db. regards Sandeep