[GENERAL] good settings for DB parameters such as shared_buffers, checkpoint_segments in PostgreSQL 9

2010-11-11 Thread AI Rumman
I am going to install PostgreSQL 9.0 for my running application, which is currently on 8.1. My DB size is 3 GB. Server specification: dual-core, 4 CPUs; RAM: 32 GB; OS: CentOS. What will be good settings for DB parameters such as shared_buffers, checkpoint_segments, etc.? Any help please.
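A rough starting point for a box like that might look as follows. Every value below is an illustrative guess that needs benchmarking against the actual workload, not a recommendation; checkpoint_segments applies to releases of this era such as 9.0.

```
# postgresql.conf -- illustrative starting values for a 32 GB RAM server
# running PostgreSQL 9.0; tune against your own workload.
shared_buffers = 8GB               # ~25% of RAM is a common rule of thumb
effective_cache_size = 24GB        # planner hint: ~75% of RAM
checkpoint_segments = 32           # more WAL segments = fewer checkpoints
checkpoint_completion_target = 0.7 # spread checkpoint I/O over time
work_mem = 16MB                    # per sort/hash node; multiply by connections
maintenance_work_mem = 1GB         # for VACUUM, CREATE INDEX
```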

Re: [GENERAL] Problem with to_tsquery() after restore on PostgreSQL 9.0.1

2010-11-11 Thread Markus Wollny
Hi! Tom Lane t...@sss.pgh.pa.us writes: So far as I can see offhand, the only way you'd get that error message is if to_tsquery were declared to take OID not regconfig as its first argument. I suspect it has to do with the Tsearch2-compatibility modules from contrib - these were

Re: [GENERAL] good settings for DB parameters such as shared_buffers, checkpoint_segments in PostgreSQL 9

2010-11-11 Thread tuanhoanganh
I have the same question. My computer is running a POS with a Postgres 8.9.11 database. RAM: 16 GB; OS: Windows 2008 R2; CPU: Xeon 2 GHz; users: 50-60 (~200 connections; I increased Windows SharedSection=1024,20480,1024 for 125 connections). Disk: RAID 1. What will be good settings for DB

Re: [GENERAL] REINDEX requirement?

2010-11-11 Thread Marc Mamin
Hello, When reindexing, we keep the previous and new reltuples/relpages ratio in a reference table in order to track reindex efficiency. We also have a maintenance job that compares this table with the stats from pg_class and automatically reindexes the relations where the ratio has degraded

Re: [GENERAL] dblink_get_result issue

2010-11-11 Thread Marc Mamin
For now I just ignore the first exception. BEGIN PERFORM * FROM dblink_get_result('myconn') AS x (t text); EXCEPTION WHEN datatype_mismatch THEN NULL; END; PERFORM * FROM dblink_get_result('myconn') AS x (t text); thanks, Marc Mamin -Original Message- From: Tom Lane

Re: [GENERAL] Considering Solid State Drives

2010-11-11 Thread Marc Mamin
SSD caveats are well described here: http://www.postgresql.org/about/news.1249 HTH, Marc Mamin -Original Message- From: pgsql-general-ow...@postgresql.org [mailto:pgsql-general-ow...@postgresql.org] On Behalf Of Allan Kamau Sent: Thursday, 11 November 2010 07:42 To: Postgres General

[GENERAL] ignore errors for COPY

2010-11-11 Thread Vangelis Katsikaros
Hello, I have Postgres 8.3.12 and I have the following issue: I have a table create table test( table_id integer, datetime timestamp, MMSI integer, lat real, lng real ); and I bulk-insert data into this table with COPY. A tiny portion of the data in the file is wrong.
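One common workaround is to COPY into an all-text staging table that never rejects a row, then move only the rows that cast cleanly. This is a sketch under the assumption that the column names match the post; the validation regexes are illustrative and would need tightening for real data. (pgloader, suggested later in the thread, automates this kind of filtering.)

```sql
-- Staging table with text columns: COPY will not fail on malformed values.
CREATE TABLE test_staging (
    table_id text, datetime text, mmsi text, lat text, lng text
);
-- COPY test_staging FROM '/path/to/data.txt';

-- Move only the rows whose key fields look castable.
INSERT INTO test (table_id, datetime, mmsi, lat, lng)
SELECT table_id::integer, datetime::timestamp,
       mmsi::integer, lat::real, lng::real
FROM test_staging
WHERE table_id ~ '^[0-9]+$'
  AND mmsi     ~ '^[0-9]+$';
```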

Re: [GENERAL] ignore errors for COPY

2010-11-11 Thread Guillaume Lelarge
Le 11/11/2010 13:01, Vangelis Katsikaros wrote: Hello, I have Postgres 8.3.12 and I have the following issue: I have a table create table test( table_id integer, datetime timestamp, MMSI integer, lat real, lng real ); and I bulk-insert data into this

Re: [GENERAL] 2PC w/ dblink

2010-11-11 Thread Vick Khera
On Wed, Nov 10, 2010 at 12:39 PM, John R Pierce pie...@hogranch.com wrote: My developers are complaining about the lack of support for 2 phase commit in this scenario. Can we get any mileage on PREPARE TRANSACTION in a dblink sort of environment like this? Yes, that's an ideal case for
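A minimal sketch of what that could look like with dblink. The function names are the real dblink contrib API; the connection string, table, and transaction id are placeholders. Note the remote server needs max_prepared_transactions > 0.

```sql
SELECT dblink_connect('conn', 'dbname=remote host=otherhost');
SELECT dblink_exec('conn', 'BEGIN');
SELECT dblink_exec('conn', 'INSERT INTO t VALUES (1)');
-- park the remote transaction in the prepared state
SELECT dblink_exec('conn', $$PREPARE TRANSACTION 'tx_42'$$);
-- ... do the local work (and PREPARE TRANSACTION locally too) ...
-- then resolve both sides:
SELECT dblink_exec('conn', $$COMMIT PREPARED 'tx_42'$$);  -- or ROLLBACK PREPARED
SELECT dblink_disconnect('conn');
```

The application (or an external transaction manager) is responsible for resolving any transactions left prepared after a crash; pg_prepared_xacts lists them.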

Re: [GENERAL] NOTIFY/LISTEN why is not a callback as notice processing.

2010-11-11 Thread Vick Khera
On Wed, Nov 10, 2010 at 5:20 PM, Tom Lane t...@sss.pgh.pa.us wrote: Not in libpq.  libpq is just a passive library, it can't cause actions to happen when you aren't calling it.  So there's no point in a callback: you might as well just test for occurrences of a NOTIFY at times when you're
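The SQL side of the mechanism being discussed is simple; the client then checks for the notification when it next talks to libpq (e.g. via PQconsumeInput()/PQnotifies()) rather than receiving a callback. A sketch:

```sql
-- session 1: subscribe to a channel
LISTEN my_channel;

-- session 2: fire the event (the payload argument requires 9.0+)
NOTIFY my_channel, 'row 42 changed';

-- session 1 sees the notification the next time it polls the
-- connection or completes any command.
```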

Re: [GENERAL] ignore errors for COPY

2010-11-11 Thread Rhys A.D. Stewart
On Thu, Nov 11, 2010 at 8:05 AM, Guillaume Lelarge guilla...@lelarge.info wrote: Le 11/11/2010 13:01, Vangelis Katsikaros wrote: Hello, I have Postgres 8.3.12 and I have the following issue: I have a table create table test( table_id integer, datetime timestamp, MMSI

Re: [GENERAL] good settings for DB parameters such as shared_buffers, checkpoint_segments in PostgreSQL 9

2010-11-11 Thread Vick Khera
On Thu, Nov 11, 2010 at 2:59 AM, AI Rumman rumman...@gmail.com wrote: Server Specification:   dual-core 4 cpu   RAM: 32 GB   OS: Centos What will be good settings for DB parameters such as shared_buffers, checkpoint_segment and etc. I'll take this one ... :) On my 24GB quad-core Opteron

Re: [GENERAL] good settings for DB parameters such as shared_buffers, checkpoint_segments in PostgreSQL 9

2010-11-11 Thread Vick Khera
On Thu, Nov 11, 2010 at 3:30 AM, tuanhoanganh hatua...@gmail.com wrote: My Computer is running POS with Postgres 8.9.11 database  Ram : 16GB  OS : Windows 2008 R2  CPU XEON 2G  User : 50-60 user (connect ~ 200 connects, I increase Windows SharedSection=1024,20480,1024 for 125 connects).  

Re: [GENERAL] Considering Solid State Drives

2010-11-11 Thread Vick Khera
On Thu, Nov 11, 2010 at 1:42 AM, Allan Kamau kamaual...@gmail.com wrote: After googling I found little resent content (including survival statistics) of using SSDs in a write intensive database environment. We use the Texas Memory RAMSan-620 external disk units. It is designed specifically to

Re: [GENERAL] good settings for DB parameters such as shared_buffers, checkpoint_segments in PostgreSQL 9

2010-11-11 Thread tv
On Thu, Nov 11, 2010 at 3:30 AM, tuanhoanganh hatua...@gmail.com wrote: My Computer is running POS with Postgres 8.9.11 database Not sure which version that is. There's nothing like 8.9.11 ... Ram: 16GB OS: Windows 2008 R2 CPU XEON 2G User: 50-60 user (connect ~ 200 connects, I

Re: [GENERAL] Considering Solid State Drives

2010-11-11 Thread Radosław Smogura
Hello, When choosing an SSD drive you need to consider: * the number of writes to a particular sector, which is about 100k to 200k before the sector fails; * in case of DB growth, the limited size of those drives. As part of a data-mining activity, I have some plpgsql functions (executed in parallel, up to 6

Re: [GENERAL] Considering Solid State Drives

2010-11-11 Thread David Siebert
ZFS has an option to use an SSD as cache for the spinning drives, and ZFS under Solaris has turned in some really good I/O numbers. The problem is that, with the new Sun, I am not feeling so good about the open nature of Solaris. ZFS performance under BSD, I have read, does not match ZFS under Solaris. On

Re: [GENERAL] good settings for DB parameters such as shared_buffers, checkpoint_segments in PostgreSQL 9

2010-11-11 Thread David Fetter
On Thu, Nov 11, 2010 at 08:30:16AM -0500, Vick Khera wrote: On Thu, Nov 11, 2010 at 2:59 AM, AI Rumman rumman...@gmail.com wrote: Server Specification:   dual-core 4 cpu   RAM: 32 GB   OS: Centos What will be good settings for DB parameters such as shared_buffers, checkpoint_segment

Re: [GENERAL] ignore errors for COPY [solved]

2010-11-11 Thread Vangelis Katsikaros
On 11/11/2010 03:05 PM, Guillaume Lelarge wrote: You'd do better to look at pgloader, which will use COPY to put your data in your table and find the lines in error. Of course, it takes time to detect the lines in error. But at least all good lines will be in your table, and all bad lines will be

[GENERAL] Schema tool

2010-11-11 Thread Aram Fingal
A while back, I thought I remembered seeing a Mac OS X client for PostgreSQL which had a feature where it would display a graphic schema of whatever database you connect to, but I can't seem to find it again (web searching). I did come across one post which said that this was a planned

Re: [GENERAL] Considering Solid State Drives

2010-11-11 Thread Joachim Worringen
Am 11.11.2010 16:40, schrieb David Siebert: ZFS has an option to use an SSD as cache for the spinning drives. ZFS under Solaris has turned in some really good IO numbers. The problem is with the new Sun I am not feeling so good about the open nature of Solaris. ZFS performance under BSD I have

Re: [GENERAL] Schema tool

2010-11-11 Thread Dmitriy Igrishin
Hey Aram, I recommend dbWrench by Nizana. It has nice synchronization capabilities, forward/reverse engineering, and supports many built-in PostgreSQL types as well as user-defined types. NB: it is a commercial application. Another alternative is MicroOLAP Database Designer. Both tools

Re: [GENERAL] Schema tool

2010-11-11 Thread Rob Sargent
On 11/11/2010 09:50 AM, Aram Fingal wrote: A while back, I thought I remembered seeing a Mac OS X client for PostgreSQL which had a feature where it would display a graphic schema of whatever database you connect to, but I can't seem to find it again (web searching). I did come across

Re: [GENERAL] ignore errors for COPY [solved]

2010-11-11 Thread Guillaume Lelarge
Le 11/11/2010 17:46, Vangelis Katsikaros wrote: On 11/11/2010 03:05 PM, Guillaume Lelarge wrote: You'd do better to look at pgloader, which will use COPY to put your data in your table and find the lines in error. Of course, it takes time to detect the lines in error. But at least all good

Re: [GENERAL] Schema tool

2010-11-11 Thread Guillaume Lelarge
Le 11/11/2010 18:58, Rob Sargent a écrit : On 11/11/2010 09:50 AM, Aram Fingal wrote: A while back, I thought I remembered seeing a Mac OS X client for PostgreSQL which had a feature where it would display a graphic schema of whatever database you connect to but I can't seem to find it

[GENERAL] inet data type does not allow use of % as zone-ID delimiter

2010-11-11 Thread Andrus
Windows uses % as the zone-ID delimiter in link-local IPv6 addresses. Trying to use it like create temp table test (test inet) on commit drop; insert into test values('fe80::f1ea:f3f4:fb48:7155%10') returns error ERROR: invalid input syntax for type inet: fe80::f1ea:f3f4:fb48:7155%10 LINE 2: insert into test
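The % suffix in fe80::...%10 is a link-local zone (scope) ID, which the inet type does not store. One workaround, sketched here, is to strip it before casting:

```sql
-- split_part() drops everything from the first '%' onward
SELECT split_part('fe80::f1ea:f3f4:fb48:7155%10', '%', 1)::inet;
-- yields fe80::f1ea:f3f4:fb48:7155
```

The zone ID is only meaningful on the host that produced the address, so if it matters to the application it would need to be kept in a separate column.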

Re: [GENERAL] Schema tool

2010-11-11 Thread Marc Mamin
Hello, maybe SchemaSpy can help you? http://schemaspy.sourceforge.net/sample/relationships.html HTH, Marc Mamin -Original Message- From: pgsql-general-ow...@postgresql.org [mailto:pgsql-general-ow...@postgresql.org] On Behalf Of Aram Fingal Sent: Thursday, 11 November 2010 17:51 To:

Re: [GENERAL] good settings for DB parameters such as shared_buffers, checkpoint_segments in PostgreSQL 9

2010-11-11 Thread Vick Khera
On Thu, Nov 11, 2010 at 11:45 AM, David Fetter da...@fetter.org wrote: max_prepared_transactions = 100 # guideline: same number as max_connections This should be either 0 (no 2PC) or bounded from below by max_connections. In general, sure. I have one app that uses 2PC, and it makes

Re: [GENERAL] 2PC w/ dblink

2010-11-11 Thread John R Pierce
On 11/11/10 5:17 AM, Vick Khera wrote: On Wed, Nov 10, 2010 at 12:39 PM, John R Pierce pie...@hogranch.com wrote: My developers are complaining about the lack of support for 2 phase commit in this scenario. Can we get any mileage on PREPARE TRANSACTION in a dblink sort of environment like

Re: [GENERAL] Schema tool

2010-11-11 Thread Jeff Ross
On 11/11/10 12:45, Marc Mamin wrote: Hello, maybe SchemaSpy can help you? http://schemaspy.sourceforge.net/sample/relationships.html HTH, Marc Mamin Thanks for this link! I've been looking for something that can run on the command line for quite a while, and even better, it outputs to HTML and

Re: [GENERAL] Schema tool

2010-11-11 Thread Gary Chambers
Marc, may schemaspy help you? http://schemaspy.sourceforge.net/sample/relationships.html Thank you *VERY* much for suggesting this tool! -- Gary Chambers /* Nothing fancy and nothing Microsoft! */

Re: [GENERAL] Schema tool

2010-11-11 Thread Aram Fingal
Thanks, each of you, for all the suggestions on schema-generating tools. I haven't had a chance to evaluate them all yet, but DBVisualizer looks pretty good. In the meanwhile I found SQL Power Architect, which is also free/open source and can do this kind of diagramming, but is not as good as

Re: [GENERAL] Schema tool

2010-11-11 Thread Thomas Kellerer
Aram Fingal wrote on 11.11.2010 22:45: I was thinking of reporting back to this forum with the advantages/disadvantages of each tool, as I see it, but realized that I was rapidly getting too far off topic for a list focused specifically on PostgreSQL. I don't think this would be off-topic here if

[GENERAL] Seeking advice on database replication.

2010-11-11 Thread Demitri Muna
Hello, I am interested in sharing/replicating data between different databases, and I'd like to ask whether what I'd like to do is possible in PostgreSQL. I have read a fair amount of documentation and was looking forward to PostgreSQL 9, but I don't think it will do what I want. I have an

[GENERAL] Instructions/status of modpglogger

2010-11-11 Thread mabra
Hi All! I just started with Postgres and wish to use a Postgres DB as a logging destination for my Apache webserver in my Debian environment [this was not the reason to start with Postgres]. Does someone know if this works [there is no package in my distribution]? Any tips would be great! br++mabra

Re: [GENERAL] Schema tool

2010-11-11 Thread Tom Lane
Thomas Kellerer spam_ea...@gmx.net writes: Aram Fingal wrote on 11.11.2010 22:45: I was thinking of reporting back to this forum with advantages/disadvantages of each tool, as I see it, but realized that I was rapidly getting too far off topic for a list focused specifically on PostgreSQL.

Re: [GENERAL] Seeking advice on database replication.

2010-11-11 Thread Scott Marlowe
On Thu, Nov 11, 2010 at 3:05 PM, Demitri Muna thatsanicehatyouh...@mac.com wrote: Hello, I am interested in sharing/replicating data between different databases, and I'd like to ask if what I'd like to do is possible in postgresql. I have read a fair amount of documentation and was looking

Re: [GENERAL] Instructions/status of modpglogger

2010-11-11 Thread Jeff Ross
On 11/11/10 14:48, ma...@manfbraun.de wrote: Hi All! I just started with Postgres and wish to use a Postgres DB as a logging destination for my Apache webserver in my Debian environment [this was not the reason to start with Postgres]. Does someone know if this works [there is no package in my

[GENERAL] Problem using CASTed timestamp column

2010-11-11 Thread Digimer
Hi all, I've got an odd problem. I've got a column with a non-standard date/time in it that I carve up and re-cast as a timestamp. This works fine. Now, though, I am trying to check/restrict my results to dates before now(), and it's telling me that the column doesn't exist.

Re: [GENERAL] Problem using CASTed timestamp column

2010-11-11 Thread Richard Broersma
On Thu, Nov 11, 2010 at 2:26 PM, Digimer li...@alteeve.com wrote: http://pastebin.com/sExiBynp Any hints on what I am doing wrong? You can't refer to a column alias in the WHERE clause, unfortunately. You'll need to either nest this query in another query to use the alias name, or you
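The standard workaround is to wrap the expression in a subquery so the alias becomes a real column name at the outer level. The table name, column, and format string below are invented for illustration, since the original query is only in the pastebin:

```sql
SELECT *
FROM (
    -- compute the cast once, under a real alias
    SELECT raw_value,
           to_timestamp(raw_value, 'YYYY-MM-DD-HH24.MI.SS') AS ts
    FROM history
) AS sub
WHERE sub.ts < now()   -- the alias is now a column of the subquery
ORDER BY sub.ts;
```

Repeating the full cast expression in the WHERE clause works too; the subquery just avoids writing it twice.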

Re: [GENERAL] Schema tool

2010-11-11 Thread Dann Corbit
-Original Message- From: pgsql-general-ow...@postgresql.org [mailto:pgsql-general- ow...@postgresql.org] On Behalf Of Marc Mamin Sent: Thursday, November 11, 2010 11:46 AM To: Aram Fingal; Postgres-General General Subject: Re: [GENERAL] Schema tool Hello, may schemaspy help you

Re: [GENERAL] Considering Solid State Drives

2010-11-11 Thread mark
Search the PG performance mailing list archive. There have been some good posts about SSD drives there related to PG use. -Original Message- From: pgsql-general-ow...@postgresql.org [mailto:pgsql-general-ow...@postgresql.org] On Behalf Of Allan Kamau Sent: Wednesday, November 10, 2010

[GENERAL] More then 1600 columns?

2010-11-11 Thread Mark Mitchell
Greetings, PostgreSQL gurus. From my googling I know this has been discussed before, and from what I have read the consensus seems to be that you can recompile Postgres and increase the block size from 8k to 16k or 32k to double or quadruple this limit. I recompiled the latest 9.0.1 source rpm

Re: [GENERAL] More then 1600 columns?

2010-11-11 Thread Tom Lane
Mark Mitchell mmitch...@riccagroup.com writes: Is there a hard limit of 1600 that you cannot get around? Yes. Generally, wanting more than a few dozen columns is a good sign that you need to rethink your schema design. What are you trying to accomplish exactly?

Re: [GENERAL] More then 1600 columns?

2010-11-11 Thread John R Pierce
On 11/11/10 9:24 PM, Tom Lane wrote: Mark Mitchell mmitch...@riccagroup.com writes: Is there a hard limit of 1600 that you cannot get around? Yes. Generally, wanting more than a few dozen columns is a good sign that you need to rethink your schema design. What are you trying to accomplish

Re: [GENERAL] More then 1600 columns?

2010-11-11 Thread Dmitriy Igrishin
Hey Mark, Yeah, I can't imagine an entity in a real project with more than 100 columns. It's a rare case. But if your entities (rows/tuples) of some class (table) can contain a variable set of columns (properties), you can look at the hstore contrib module. 2010/11/12 John R Pierce

Re: [GENERAL] More then 1600 columns?

2010-11-11 Thread Peter Bex
On Fri, Nov 12, 2010 at 10:17:50AM +0300, Dmitriy Igrishin wrote: Hey Mark, Yeah, I can't imagine an entity in a real project with more than 100 columns. It's a rare case. But if your entities (rows/tuples) of some class (table) can contain a variable set of columns (properties) you can

Re: [GENERAL] More then 1600 columns?

2010-11-11 Thread Dmitriy Igrishin
Hey Peter, Unfortunately, there are no indexes on arrays (only on expressions). With hstore we can easily create a GiST index for effective access. 2010/11/12 Peter Bex peter@xs4all.nl On Fri, Nov 12, 2010 at 10:17:50AM +0300, Dmitriy Igrishin wrote: Hey Mark, Yeah, I can't imagine an

Re: [GENERAL] More then 1600 columns?

2010-11-11 Thread Peter Bex
On Fri, Nov 12, 2010 at 10:43:14AM +0300, Dmitriy Igrishin wrote: Hey Peter, Unfortunately, there are no indexes on arrays (only on expressions). With hstore we can easily create a GiST index for effective access. True. In my project I only ever needed to search on a particular key, and I made
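A minimal sketch of the variable-columns approach discussed in this thread, using hstore plus the GiST index mentioned above. Table and key names are invented for illustration; on 9.0, hstore is installed from contrib with a SQL script (CREATE EXTENSION arrived in 9.1):

```sql
CREATE TABLE items (
    id    serial PRIMARY KEY,
    attrs hstore              -- arbitrary key/value properties per row
);
CREATE INDEX items_attrs_idx ON items USING gist (attrs);

INSERT INTO items (attrs)
VALUES ('color => "red", size => "10"');

-- containment query that can use the GiST index
SELECT id FROM items WHERE attrs @> 'color => "red"';
```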