Re: [ADMIN] Index fillfactor changed in pg9?

2011-04-01 Thread Kenneth Marshall
On Fri, Apr 01, 2011 at 01:46:03PM +0100, Glyn Astill wrote:
 Hey Guys,
 
 Just testing some new hardware on 9.0.3 and have restored one of our dumps 
 from 8.4.7.  What I'm seeing is although table sizes are the same, indexes 
 are a lot bigger, approx 50%.
 
 I've done a search and so far can't find anything, but have default 
 fillfactors changed? Or is it something else?
 
 Thanks
 Glyn
 

Given absolutely zero information: are both platforms the same
number of bits, 32- versus 64-bit? The alignment needs of 64-bit
builds could result in needing more space. Another possibility is
that the order of data insertion produced a bigger index. What
happens if you run a REINDEX on both DBs and then compare the
index sizes?
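A hedged sketch of the comparison, to run on each cluster (the table name is a placeholder; pg_relation_size() and pg_size_pretty() exist in both 8.4 and 9.0):

```sql
-- List index sizes, largest first:
SELECT relname, pg_size_pretty(pg_relation_size(oid)) AS size
FROM pg_class
WHERE relkind = 'i'
ORDER BY pg_relation_size(oid) DESC;

-- Rebuild all indexes on one table, then re-run the query above:
REINDEX TABLE mytable;
```

If the rebuilt indexes come out the same size on both clusters, the difference was insertion order rather than a fillfactor change.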

Regards,
Ken

-- 
Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-admin


Re: [ADMIN] pg_clogs hanging around

2011-03-10 Thread Kenneth Marshall
On Thu, Mar 10, 2011 at 07:01:10AM -0600, Scott Whitney wrote:
 Ooops...I accidentally took this off list, as Kevin was nice enough to point 
 out.
 
 
  What am I looking for? 
 
 Outliers. 
 
  Yeah. It's just those 2. I'd assume that the db I created
  yesterday would be an outlier, but template0 has been there all along
  (of course) and is still listed as 648, a significantly smaller number. 
 
 
  The output shows me 345 rows, most of which are 132x numbers. 
  Two of them (template0 and a database created yesterday) say 648. 
 
 The template0 database is what's keeping the clog files from being 
 cleaned up, but I guess the big question is why you care. They will 
 go away eventually, and shouldn't affect performance. Are they 
 taking enough space to merit extraordinary effort to clean them up? 
  -Kevin 
 
 
 My concern is that when we had a failure a few years ago, and one of the clog 
 files went bad. I had to manually recreate some customer data after bringing 
 up the previous backup. So, I'd rather have them not there, because, well, if 
 there are 200 of them in the dir, there's a higher chance in a case of a 
 crash that one goes bad than if I have 15. 
 
 Would adding -f (full) clean these up? I seem to recall it did in earlier 
 versions. I've added the -F to it already, and that didn't seem to help. 
 

If you have hardware problems like that, you have much bigger problems:
silent corruption could be occurring in any of the other database
files. Good luck.

Cheers,
Ken



Re: [ADMIN] pg_clogs hanging around

2011-03-10 Thread Kenneth Marshall
On Thu, Mar 10, 2011 at 07:56:26AM -0600, Scott Whitney wrote:
  
  If you have hardware problems like that you have way more problems.
  You could have corruption (silent) occurring in any of the other
  database files. Good luck.
 
 I am, in fact, aware of that, but every single machine ever manufactured will 
 have hardware problems such as this at some point. It stems quite simply from 
 Ohm's Law, one gross over-simplification of which is "if it's got a resistor 
 in it, it's going to fail at some point," as I'm sure you know. It's merely a 
 matter of whether proactive replacement, backups, standby systems, etc. 
 ameliorate that risk. When we had our failure a couple of years ago, they did 
 not.
 
 Regardless, my question still stands, and I do, in fact, care about ANY 
 database blocking cleanup of clogs (or anything else). There's this concept 
 of "if this, then what else": if template0 (or anything else) is blocking 
 the ability to properly clean those up, what else is possibly screwed up in 
 a similar fashion?
 
 So, what can I do to resolve this issue?
 

True, entropy rules. I think that you can use VACUUM FREEZE
to allow the clogs to be cleaned up.
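A minimal sketch, run once per database (the catalog query just shows the oldest transaction ID each database still needs clog data for):

```sql
-- Aggressively freeze old tuples so their transaction IDs (and the
-- pg_clog segments that track them) can eventually be retired:
VACUUM FREEZE;

-- Check which database is holding back clog truncation:
SELECT datname, datfrozenxid FROM pg_database;
```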

Cheers,
Ken



Re: [ADMIN] pg_clogs hanging around

2011-03-10 Thread Kenneth Marshall
On Thu, Mar 10, 2011 at 08:18:34AM -0600, Scott Whitney wrote:
  On Thu, Mar 10, 2011 at 07:56:26AM -0600, Scott Whitney wrote:
   
If you have hardware problems like that you have way more
problems. You could have corruption (silent) occurring in any of
the other
database files. Good luck.
  
   I am, in fact, aware of that, but every single machine ever
   manufactured will have hardware problems such as this at some point.
   It stems quite simply from Ohm's Law, one gross over-simplification
   of which is "if it's got a resistor in it, it's going to fail at
   some point," as I'm sure you know. It's merely a matter of whether
   proactive replacement, backups, standby systems, etc. ameliorate
   that risk. When we had our failure a couple of years ago, they did
   not.
  
   Regardless, my question still stands, and I do, in fact, care about
   ANY database blocking cleanup of clogs (or anything else). There's
   this concept of "if this, then what else": if template0 (or
   anything else) is blocking the ability to properly clean those up,
   what else is possibly screwed up in a similar fashion?
  
   So, what can I do to resolve this issue?
  
  
  True, entropy rules. I think that you can use VACUUM FREEZE
  to allow the clogs to be cleaned up.
  
 I thought I was. I've got -F in there. vacuumdb -a -v -F
 

Sorry about that, you are. I have never done it from the command line,
just from psql. I was thinking -F meant FULL... :)
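For reference, the two spellings should be equivalent; a sketch:

```sql
-- `vacuumdb -a -F` runs, in every database, the same thing as:
VACUUM FREEZE;
-- whereas VACUUM FULL is spelled `vacuumdb -f` / `VACUUM FULL;`
```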

Cheers,
Ken



Re: [ADMIN] How to upgrade PostgreSQL minor releases without a dump/restore?

2011-03-04 Thread Kenneth Marshall
On Thu, Mar 03, 2011 at 07:43:46PM -0800, Uwe Schroeder wrote:
 
   Did you read the paragraph above?
   
   Install latest version, and restart postmaster.
  
  Installing of latest version as-is will keep overwriting the existing
  installed directories/files/binaries but not the /usr/local/pgsql/data/
  directory right?  Since this is our production server database, am just
  clarifying this question that came to my mind, before upgrading.
 
 For sake of safety, I'd always make a backup of the data directory. You don't 
 need to dump/restore. Just copy the files someplace else and then do the 
 update. In case something goes wrong you can always go back to your old 
 version and just copy the backup to the data directory.
 
 That's how I do upgrades. Never needed the backup, but better to have one.
 
 Uwe
 
The "just copy the files someplace else" step can take a long, long
time for a large database, and you need to have the database offline
for the copy to be consistent. Not really an option in many environments.

Cheers,
Ken



Re: [ADMIN] long running commits

2011-03-03 Thread Kenneth Marshall
On Wed, Mar 02, 2011 at 01:10:37PM -0500, Vaughn, Adam (IMS) wrote:
 Thanks for the suggestions. I made all of the changes you mentioned except 
 for the shared_buffers (which will require a downtime I have set for 
 tonight). I do have another question though, why did you pick 512 MB for the 
 new setting of shared_buffers? Everything I've ever read says that 25% of 
 available RAM is a conservative value for shared_buffers.
 
 Also, we had another one of these instances earlier today. During the 23 
 minute commit a single CPU was at 98% and it looked like all writes were 
 backed up waiting for the commit to finalize. During the time our writing 
 never got above 25 MB/s (far less than we can handle). Is it possible that 
 we're missing an index somewhere or there's something else going on?
 
 Thanks again in advance
 

Hi Adam,

Having recently been benchmarking NFS filesystem performance with
random I/O streams, I think that you do not have the I/O performance
from your filer that you think you do. A 13-disk RAID-DP gives you
at best 11 spindles for random I/O, and a single spindle can give at
best 5 MB/sec of random write throughput, for an aggregate throughput
of 55 MB/sec if you were the only user of the filer. It is not
unreasonable to assume that you have only 50% of the total throughput
available if the filer is otherwise in use, as you stated in your
original message. The changes that have been suggested should allow
you to keep the write needs below your apparent 27 MB/sec cap and
reduce or eliminate the pauses.

Cheers,
Ken



Re: [ADMIN] Postgres service won't start, doesn't log any errors

2010-11-30 Thread Kenneth Marshall
We definitely had a permissions problem. Once that was fixed, per
the documentation, it started right up. You may be able to log
access problems in Windows too, to help you locate the problem.

Good luck,
Ken

On Tue, Nov 30, 2010 at 04:33:08PM +, Lou Picciano wrote:
 Mr./Ms. 100-eti, 
 
 
 Might there be a 'server.log' written to by default in your cluster 
 directory? 
 
 
 Experience says that these things often come down to permissions problems... 
 
 
 Lou 
 
 - Original Message - 
 From: 100-eti 100-...@csc-scc.gc.ca 
 To: pgsql-admin@postgresql.org 
 Sent: Tuesday, November 30, 2010 11:29:17 AM 
 Subject: Re: [ADMIN] Postgres service won't start, doesn't log any errors 
 
 
 That's the problem though... I've searched through all the PostgreSQL 
 directories, and there are no logs that indicate anything. 
 
 Next step is to try this on CentOS as it seems the Windows implementation is 
 broken or at least I can't seem to get it going (yet). 
 -- 
 View this message in context: 
 http://postgresql.1045698.n5.nabble.com/Postgres-service-won-t-start-doesn-t-log-any-errors-tp3218164p3286312.html
  
 Sent from the PostgreSQL - admin mailing list archive at Nabble.com. 
 



Re: [ADMIN] Space occupied by Postgres index.

2010-11-30 Thread Kenneth Marshall
On Tue, Nov 30, 2010 at 10:56:53PM +0530, Shridhar Polas wrote:
 Hi,
 
  
 
 I am facing a problem where indexes created on some tables are occupying
 huge space on disk, and it seems to me that this space is not getting
 reclaimed even when there are very few records in the associated table.
 
  
 
 When I ran a full vacuum, the disk space occupied by tables was reclaimed,
 but not the disk space occupied by indexes.
 
  
 
 Can somebody please tell me when disk space occupied by Postgres index is
 reclaimed, without performing re-indexing on those tables?
 
  
 
 Thanks,
 
 Shridhar
 

VACUUM FULL will cause index bloat. You will need to REINDEX to
recover the space. Note that you should not really need to use VACUUM
FULL on a normal, correctly configured system.
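A minimal sketch of the recovery step (names are placeholders):

```sql
-- Rebuild every index on the bloated table in one shot:
REINDEX TABLE mytable;

-- Or rebuild just one index:
REINDEX INDEX mytable_guid_idx;
```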

Cheers,
Ken



Re: [ADMIN] Space occupied by Postgres index.

2010-11-30 Thread Kenneth Marshall
A regular vacuum will free space at the end of a table.
In normal use, your optimum space consumption is the amount needed to
hold the current table data plus the space needed to hold the
old deleted or modified data until a regular vacuum can make it
available for reuse. In addition, using HOT updates with a table
fillfactor < 100 can greatly reduce table fragmentation and
bloating.
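A sketch of the fillfactor change (the table name and the 90% figure are illustrative):

```sql
-- Keep 10% of each heap page free so HOT updates can place the new
-- row version on the same page; applies to newly written pages only:
ALTER TABLE mytable SET (fillfactor = 90);
```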

Cheers,
Ken

On Tue, Nov 30, 2010 at 11:12:23PM +0530, Shridhar Polas wrote:
 Thanks for a quick response Ken.
 
 One more query:-
 
 If I do not perform VACUUM FULL and REINDEX, does Postgres reclaim the
 space automatically when the number of records in a table drops below
 some limit? I mean, would the total disk space consumed by Postgres ever
 decline at any point without performing VACUUM FULL and REINDEX?
 
 In my test setup I found that the disk space consumed by Postgres does
 not decline even after deleting records from tables, if I do not
 perform VACUUM FULL and REINDEX.
 
 Thanks again
 Shridhar
 
 -Original Message-
 From: Kenneth Marshall [mailto:k...@rice.edu] 
 Sent: Tuesday, November 30, 2010 10:59 PM
 To: Shridhar Polas
 Cc: pgsql-admin@postgresql.org
 Subject: Re: [ADMIN] Space occupied by Postgres index.
 
 On Tue, Nov 30, 2010 at 10:56:53PM +0530, Shridhar Polas wrote:
  Hi,
  
   
  
  I am facing a problem where indexes created on some tables are 
  occupying huge space on disk, and it seems to me that this space is not 
  getting reclaimed even when there are very few records in the associated
 table.
  
   
  
  When I ran a full vacuum, the disk space occupied by tables was 
  reclaimed, but not the disk space occupied by indexes.
  
   
  
  Can somebody please tell me when disk space occupied by Postgres index 
  is reclaimed, without performing re-indexing on those tables?
  
   
  
  Thanks,
  
  Shridhar
  
 
 VACUUM FULL will cause index bloat. You will need to REINDEX to recover the
 space. Note that you should not really need to use VACUUM FULL on a normal,
 correctly configured system.
 
 Cheers,
 Ken
 
 



Re: [ADMIN] Disk Performance Problem on Large DB

2010-11-05 Thread Kenneth Marshall
Correct. With a single drive and no write cache, once you get more
than one I/O stream running simultaneously, i.e. 1 writing the data and
1 writing each index = 5 I/Os at once, you effectively devolve to
your drive's random I/O rate, which can be an order of magnitude
slower than its sequential I/O rate. You can use bonnie or some
other disk speed test to get those numbers for your system. When
you build the indexes after the load, each step can use sequential
I/O much more of the time, which is why it runs so much faster.

Cheers,
Ken

On Fri, Nov 05, 2010 at 02:19:41PM +1030, Samuel Stearns wrote:
 Your biggest benefit was dropping the indexes before the load, most likely.
 
 -Original Message-
 From: Jonathan Hoover [mailto:jhoo...@yahoo-inc.com] 
 Sent: Friday, 5 November 2010 2:16 PM
 To: Samuel Stearns; pgsql-admin@postgresql.org
 Cc: Kenneth Marshall
 Subject: RE: [ADMIN] Disk Performance Problem on Large DB
 
 I am in the middle of re-creating the indexes now, and what is interesting is 
 how atop is not reporting heavy use of the hard drive now. Instead, I see 
 postgres using 80% of the proc (instead of 8% earlier) and drive usage is 20+ 
 MBr/s and 16+ MBw/s now (instead of .1 and 3.0 respectively earlier). Could 
 it really be the PK causing the delay, or is it really the 
 maintenance_work_mem or simply the idea of creating the indexing after? Good 
 info, hopefully I can do some testing over these ideas over the next few 
 days. For now, I'm hoping I can just get things moving enough.
 
 As I finished this up, I have noticed disk performance is down to 4+ MBw/s 
 and MBr/s, but it is not "red" in atop any longer, and proc usage now seems 
 to be the limiting factor.
 
 Good stuff...
 Jon
 
 -Original Message-
 From: pgsql-admin-ow...@postgresql.org 
 [mailto:pgsql-admin-ow...@postgresql.org] On Behalf Of Jonathan Hoover
 Sent: Thursday, November 04, 2010 10:29 PM
 To: Samuel Stearns; pgsql-admin@postgresql.org
 Cc: Kenneth Marshall
 Subject: Re: [ADMIN] Disk Performance Problem on Large DB
 
 How does TRUNCATE differ from DELETE FROM table? Sorry, probably an easy 
 RTFM question, but I'll ask anyhow.
 
 -Original Message-
 From: Samuel Stearns [mailto:sstea...@internode.com.au] 
 Sent: Thursday, November 04, 2010 10:27 PM
 To: Jonathan Hoover; pgsql-admin@postgresql.org
 Cc: Kenneth Marshall
 Subject: RE: [ADMIN] Disk Performance Problem on Large DB
 
 TRUNCATE removes all data from the table leaving the schema structure in 
 place.
 
 What helped the most was probably the drop of the indexes.
 
 -Original Message-
 From: pgsql-admin-ow...@postgresql.org 
 [mailto:pgsql-admin-ow...@postgresql.org] On Behalf Of Jonathan Hoover
 Sent: Friday, 5 November 2010 1:53 PM
 To: pgsql-admin@postgresql.org
 Cc: Kenneth Marshall
 Subject: Re: [ADMIN] Disk Performance Problem on Large DB
 
 Just FYI, I removed the PK and the indexes for now. Load times for 1M rows is 
 now 7 SECONDS instead of 7 MINUTES (or just never happening). Granted, I made 
 the changes in #1 below, but WOW! So, question: what helped the most: 1) no 
 PK, 2) no indexes, 3) the maintenance_work_mem being uncommented? I will test 
 myself when I have time, but I'd like to know everyone's thoughts.
 
 Jon
 
 -Original Message-
 From: pgsql-admin-ow...@postgresql.org 
 [mailto:pgsql-admin-ow...@postgresql.org] On Behalf Of Jonathan Hoover
 Sent: Thursday, November 04, 2010 10:03 PM
 To: pgsql-admin@postgresql.org
 Cc: Kenneth Marshall
 Subject: Re: [ADMIN] Disk Performance Problem on Large DB
 
 1. I have now set maintenance_work_mem to 256 MB (which was previously 
 commented by the default config)
 2. The version is PostgreSQL 8.1.18 on x86_64-redhat-linux-gnu, compiled by 
 GCC gcc (GCC) 4.1.2 20080704 (Red Hat 4.1.2-46)
 3. What would be the best and cheapest thing I could for IO performance?
 4. I need to read up on TRUNCATE, which I have not used before. Care to give 
 a quick overview before I RTFM?
 
 Thanks,
 jon
 
 -Original Message-
 From: Kenneth Marshall [mailto:k...@rice.edu] 
 Sent: Thursday, November 04, 2010 4:03 PM
 To: Jonathan Hoover
 Cc: pgsql-admin@postgresql.org
 Subject: Re: [ADMIN] Disk Performance Problem on Large DB
 
 On Thu, Nov 04, 2010 at 01:42:49PM -0700, Jonathan  Hoover wrote:
  Hello,
  
  I have a RHEL 5 box, 4 GB RAM, single hard drive SATA, Intel Core 2 Duo 
  2.4. A basic workstation.
  
  I have a simple database, with one table for now. It has 4 columns:
  
  anid serial primary key unique,
  time timestamp,
  source varchar(5),
  unitid varchar(15),
  guid varchar(32)
  
  There is a btree index on each.
  
  I am loading data 1,000,000 (1M) rows at a time using psql and a COPY 
  command. Once I hit 2M rows, my performance just drops out, and the next 1M 
  never finishes. It takes 7 minutes for 1M rows to load. Once 2M are in 
  there, I've waited an hour, and nothing. It doesn't seem to matter which 1M 
  rows I try to load next, none ever finish. Each 1M

Re: [ADMIN] Disk Performance Problem on Large DB

2010-11-04 Thread Kenneth Marshall
On Thu, Nov 04, 2010 at 01:42:49PM -0700, Jonathan  Hoover wrote:
 Hello,
 
 I have a RHEL 5 box, 4 GB RAM, single hard drive SATA, Intel Core 2 Duo 2.4. 
 A basic workstation.
 
 I have a simple database, with one table for now. It has 4 columns:
 
 anid serial primary key unique,
 time timestamp,
 source varchar(5),
 unitid varchar(15),
 guid varchar(32)
 
 There is a btree index on each.
 
 I am loading data 1,000,000 (1M) rows at a time using psql and a COPY 
 command. Once I hit 2M rows, my performance just drops out, and the next 1M 
 never finishes. It takes 7 minutes for 1M rows to load. Once 2M are in there, 
 I've waited an hour, and nothing. It doesn't seem to matter which 1M rows I 
 try to load next, none ever finish. Each 1M rows is about 70MB on disk in the 
 raw input file.
 
 I have atop installed, and it reports the drives at 100%, which it reports 
 for the first 1M rows too. The MBw/s goes from 20+ on the first 2M rows, down 
 to about 4 MBw/s or less now. The processor usage is at about 2 to 8% at this 
 time (used by postgres).
 
 I have even waited for 1M rows to load, then done a vacuum for no good 
 reason, then even restarted postgresql. I've made sure no disk or proc 
 activity is happening before I start the next 1M rows. None of that seems to 
 matter.
 
 I have a total of about 70M rows to load, but am at a standstill. I've read 
 up on whatever performance docs I can find online, but I am not getting 
 anywhere.
 
 I've increased shared_buffers to 256MB, and I've tried it with fsync 
 commented out as per the default config. I've also tried it with fsync=off. 
 No difference.
 
 Ideas? Thanks in advance,
 Jon

The initial 1M-row load has no WAL logging if the table has just been
truncated or created in the same transaction. You can boost
maintenance_work_mem to increase index creation/update performance.
You are severely I/O limited and would be better off dropping your
indexes during the load and re-creating them afterwards. If you are
starting with an empty table, truncate it and then load all the data
in a single transaction, all of the COPY commands. Then COMMIT and
build the indexes. Your question is also missing key information such
as config details, PostgreSQL version, ...
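A hedged sketch of that load sequence (file paths and names are placeholders):

```sql
BEGIN;
TRUNCATE mytable;                -- same-transaction TRUNCATE lets the
                                 -- following COPYs skip WAL logging
COPY mytable FROM '/path/to/rows-01.dat';
COPY mytable FROM '/path/to/rows-02.dat';
-- ... remaining COPY commands ...
COMMIT;

-- Build the indexes only after all the data is in:
CREATE INDEX mytable_guid_idx ON mytable (guid);
```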

Cheers,
Ken



Re: [ADMIN] Forced quiesce

2010-10-28 Thread Kenneth Marshall
On Thu, Oct 28, 2010 at 07:39:19AM -0500, Little, Douglas wrote:
 Hi,
 We're having to vacuum full the system tables every day due to the number of 
 temp table drop/creates.
 We're finding that if anyone tries to execute a job during the maintenance 
 period, it often hangs the db.
 
 Is there a way that we can force the db to quiesce? Or lock out users?
 Luckily, most users are coming in through a network VIP, so all client IPs 
 are the same.
 
 Thanks in advance.
 
 Doug Little
 

Hi Doug,

You could use pgbouncer to connect to your database and then issue
a PAUSE command before the VACUUM FULL:

PAUSE;

PgBouncer tries to disconnect from all servers, first waiting for all
queries to complete. The command will not return before all queries
are finished. To be used at the time of database restart.

There is also a SUSPEND option, but the PAUSE looks like it will do
what you need.
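A sketch of the sequence on pgbouncer's admin console (the port and admin user are placeholders):

```sql
-- Connect with e.g.: psql -p 6432 -U pgbouncer pgbouncer
PAUSE;    -- waits for in-flight queries to finish, then holds clients
-- ... run the VACUUM FULL directly against the real database ...
RESUME;   -- release the held clients
```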

Regards,
Ken



Re: [ADMIN] cache lookup for operator ...

2010-07-13 Thread Kenneth Marshall
On Tue, Jul 13, 2010 at 10:29:42AM -0700, Deron wrote:
 Hello,
 
 I am trying to determine what might cause an issue like this.   This was a
 large (several hundred GB) data warehouse server with PG 8.2.4.   I can't
 really determine what caused this issue.  We ended up restoring the data
 from a backup.
 
 Here are some of the errors in the log:
 
 postmaster.1:Jul 11 14:13:36  postgres[18051]: [8-1] ERROR: cache lookup
 failed for operator 660
 
 postmaster.1:Jul 11 15:27:00  postgres[26428]: [2-1] ERROR: cache lookup
 failed for operator 93
 
 Attempting to list the tables using psql:
 
 #\dt
 
 ERROR:  operator does not exist: oid = oid
 
 LINE 6:  JOIN pg_catalog.pg_roles r ON r.oid = c.relowner
 
  ^
 
 HINT:  No operator matches the given name and argument type(s). You may need
 to add explicit type casts.
 
 Deron

Could it be the result of running a version of 8.2 that is over three
years old? Try checking the release notes for the last 13 point
releases and see if any of the fixes might apply to your situation. I
would certainly upgrade to prevent future problems and data loss.
There are good reasons for all of the point releases of the major
PostgreSQL releases. Maybe someone else can remember the details of
a problem such as yours.

Good luck.
Ken



Re: [ADMIN] check_postgres_bloat

2010-06-11 Thread Kenneth Marshall
On Fri, Jun 11, 2010 at 10:50:20AM -0400, dx k9 wrote:
 
 Hi,
 
  
 
 I'm running the check_postgres.pl --action=bloat on a database and finding 
 that there is wasted space.
 
  
 
 I'm using 95% for the critical %.  If I use 110% I get the same thing, but 
 115% shows everything is OK.
 
  
 
 check_postgres_bloat -H host -p port -db thing -t thing1 -c 95%
 
 check_postgres.pl version 2.14.3
 
 postgres version 8.3.9
 
  
 
  
 
 POSTGRES_BLOAT CRITICAL: service=thing1 (db thing) table public.thing1 
 rows:25537152 pages:212311 shouldbe:187545 (1.1X) wasted size:202883072 (193 
 MB) * (db thing) table public.thing2 rows:10109173 pages:101832 
 shouldbe:94016 (1.1X) wasted size:64028672 (61 MB) * (db thing) table thing3 
 rows:1184946 pages:28701 shouldbe:21434 (1.3X) wasted size:59531264 (56 MB) * 
  
  
 I have run a reindexdb on the table and a full vacuum (fze), but still I 
 can't get the wasted size to go down on these, so I continue to get critical 
 alerts that my tables are bloated.
  
 check_postgres checks for both index and table bloat.  It looks like my 
 indexes are OK; this is just picking up on table bloat.  I'm not sure what I 
 can do to reclaim the wasted space other than vacuum full & analyze.  Maybe 
 a pg_dump will do it.
  
 Thanks for any insight,
 ~DjK
  
Because of the way PostgreSQL performs updates, your database will
perform better with a little bit of extra space over the amount
needed for your data. Unless you have a read-only database, just
use normal vacuum via autovacuum and forget about vacuum full and
trying to get wasted space to zero.
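To see whether autovacuum is actually keeping up, a sketch using the statistics views (available in 8.3):

```sql
-- Dead-row counts and last autovacuum time, worst offenders first:
SELECT relname, n_live_tup, n_dead_tup, last_autovacuum
FROM pg_stat_user_tables
ORDER BY n_dead_tup DESC
LIMIT 10;
```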

Regards,
Ken



Re: [ADMIN] Shutdown fails with both 'fast' and 'immediate'

2010-05-12 Thread Kenneth Marshall
On Wed, May 12, 2010 at 10:22:14AM -0400, David Schnur wrote:
 I develop an app that uses a back-end Postgres database, currently 8.3.9.
  The database is started when the app starts up, and stopped when it shuts
 down.  Shutdown uses pg_ctl with -m fast, and waits two minutes for the
 process to complete.  If it doesn't, it tries -m immediate, and waits two
 more minutes before logging an error and giving up.
 
 One user, on OSX 10.5.8, has a script that stops the app each morning, to
 upgrade to the newest build.  In his case, both the fast and immediate
 shutdowns time out, and Postgres continues running for at least 2-4 hours.
  At that point he brings up the terminal to kill all the back-ends manually,
 so we haven't seen it finish shutting down on its own yet.  It is in fact
 shutting down, because all queries fail with the 'database system is
 shutting down' error.
 
 The query running during this time is a DELETE that runs as part of the
 application's daily maintenance.  The size of the DELETE varies, and in his
 case happened to be unusually large one day, which is apparently what
 triggered the problem.  Since the DELETE never gets a chance to finish, the
 problem recurs every morning.
 
 I'll obviously need to deal with that query, but I'm concerned that Postgres
 is unable to interrupt it.  Why might this be happening?  Thanks,
 
 David

In many cases, I/O requests are not interruptible until they complete,
and a DELETE causes a lot of I/O. Check whether the processes are in
device-wait (state "D" in top or ps). The solution is to fix the DELETE
processing. One option would be to batch it into smaller numbers of
rows, which should allow the quit to squeeze in between the batches.
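A hedged sketch of such a batched DELETE (the table, key column, predicate, and batch size are all placeholders):

```sql
-- Delete at most 10000 matching rows per statement; the client loops,
-- committing between statements, until no rows are affected. The gaps
-- between transactions give a shutdown request a chance to land:
DELETE FROM mytable
WHERE id IN (
    SELECT id FROM mytable
    WHERE expired < now()
    LIMIT 10000
);
```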

Cheers,
Ken



Re: [ADMIN] Migrating from 8.3 to 8.4 on the same server

2010-04-14 Thread Kenneth Marshall
On Wed, Apr 14, 2010 at 01:29:23PM -0400, Lewis Kapell wrote:
 In the documentation under the heading Migration Between Releases we 
 read:

 It is also possible to use replication methods, such as Slony, to create 
 a slave server with the updated version of PostgreSQL. The slave can be on 
 the same computer or a different computer. [...] Such a switch-over results 
 in only several seconds of downtime for an upgrade. 

 In the section Warm Standby Servers for High Availability it says:

  For testing purposes, it is possible to run both primary and standby 
 servers on the same system. 

 Although this section does not use the term Point-In-Time Recovery, I 
 understand this is what it refers to.  Could the WAL/PITR method then be 
 used to upgrade from 8.3 to 8.4 on the same server and avoid having the 
 significant downtime of dump/restore?

 If so, I understand the two servers would have to run on different ports.  
 Are there any other issues/traps to be aware of?

 -- 

 Thank you,

 Lewis Kapell
 Computer Operations
 Seton Home Study School

I believe that the server versions must be the same for PITR, so
you cannot use it to upgrade.

Cheers,
Ken



Re: [ADMIN] Can not access PostgreSQL 8.8 via ADO

2010-03-25 Thread Kenneth Marshall
Hi Josef,

Does your ADO driver accept a version 8.4 reply from the database? It
may not be supported. In that case, you will need to update your
ADO software.

Cheers,
Ken

On Thu, Mar 25, 2010 at 03:35:10PM +0100, Josef Springer wrote:
 Thanks for the quick answer. Sorry, that was a typo. We are using 8.4, not 
 8.8.

    * We are using the OLEDB provider for ADO.
    * On 8.2 the connection string works; on 8.4 it does not.
    * The database is local and accessed via localhost.
    * An application which accesses the database via an ODBC driver
      works. Only ADO does not work (ADO error 80004005).

 Josef Springer


 Chander Ganesan wrote:
 Josef Springer wrote:
 hello,

 we had installed 8.2 and used a database via ADO with the connectstring
 *Driver={PostgreSQL 
 Unicode};Server=localhost;Uid=OfficeTalk;Pwd=OfficeTalk;Database=OfficeTalk
 *
 Now we have installed 8.8 and try to use the database with the same 
 connectstring but get the ADO error *80004005.*
 There is no PostgreSQL 8.8 - can you tell us what version you are really 
 using (or, if you think it is 8.8, where you got it from?)

 Can you connect to the database with the regular command line tools over 
 the network (or a GUI tool such as PgAdmin III), or do you get an error 
 there as well?

 The problem may be that it's unable to connect due to a database 
 configuration issue, not an ADO issue.  If you had a DSN that was working 
 before, I wouldn't change it until you've verified that you're actually 
 able to connect to the database properly.

 Did you make sure that you made the appropriate changes to postgresql.conf 
 and pg_hba.conf on the new system to ensure that users will be able to 
 connect over the network?






Re: [ADMIN] linux standard layout

2010-03-09 Thread Kenneth Marshall
Hi Ben,

If you must use a VMware server for your databases, please run
some pull-the-power-plug tests on your system to ensure that
your data integrity is maintained. Virtual machines can sometimes
cache filesystem updates in the name of performance, with disastrous
consequences for your filesystems and databases.

Cheers,
Ken

On Tue, Mar 09, 2010 at 08:18:21AM -0600, Ben Kim wrote:

 Thanks all.

 I cannot change the decision on vmware or layout, but it's great to know 
 that the rpm way is a valid one.

 I appreciate all inputs.



 Regards,

 Ben Kim


 On Mon, 8 Mar 2010, Scott Marlowe wrote:

 On Mon, Mar 8, 2010 at 10:31 PM, Ben Kim b...@tamu.edu wrote:
 Dear list,

 I have about 20 postgresql databases, about 3-4 GB in total.

 We are moving them from Solaris/SPARC to a linux based virtual machine.

 I don't like the VMWare environment, but it's not my choice, and assuming
 the cpu load is ok, will there be any benefits if I put each database on
 separate partitions, vs. simply using the one data directory?

 What reasoning was given for putting your database server in a
 virtualized environment?

 Also, how is using standard rpm, with its standard layout 
 (/var/lib/pgsql,
 /usr/lib/pgsql, ...), generally regarded? (vs. compiling everything?) 
 Does
 anyone think using the rpm is unprofessional or something that only
 beginners will do?

 I have someone who opposes the use of standard rpms (even yums) for this
 reason. I thought I'd check out how it is received professionally.

 Sounds like a religious argument.  I mostly use packages, unless I
 can't (i.e. two different versions on RH at the same time).

 I ask the question because sometimes I feel uneasy mixing rpms and source
 compilation.

 Worry more about accidentally having two different versions of the
 same lib linked to various executables.  It's easy to do with things
 like mysql and apache and php and zlib.

 If I compile something from the source, sometimes I see a boundary 
 condition
 - like, if I already have DBI from a standard rpm, it expects postgresql
 library at a certain location - making me wonder whether I should remove 
 the
 DBI rpm and compile it also from the source, or whether I should use
 standard rpms for postgresql as well. (DBI may not be a good example.)

 In general I didn't have any problems yet with standard rpms and I can 
 make
 the rpms work if there's a problem, but I may be missing something.

 My advice:

 put postgresql on its own, powerful, reliable non-virtualized server.
 Demand that the person who wants to virtualize it justify their
 decision with more than hand-wavy methodologies.  Use packages unless
 you're on RPM and you need more than 1 version of pgsql.  Even if you need to
 compile some tarball against the packages, it's still easier to
 maintain than to install it all from source.






Re: [ADMIN] linux standard layout

2010-03-09 Thread Kenneth Marshall
On Tue, Mar 09, 2010 at 01:28:20PM -0800, Joshua D. Drake wrote:
 On Tue, 2010-03-09 at 14:25 -0700, Scott Marlowe wrote:
  On Tue, Mar 9, 2010 at 2:06 PM, Joshua D. Drake j...@commandprompt.com 
  wrote:
   On Tue, 2010-03-09 at 13:35 -0700, Scott Marlowe wrote:
  
In a nutshell, I am heartly recommending virtualization.
  
   In a nutshell, you are relying on luck that both heavy iron machines
   can't lose power at the same time.  Sure, it's a low possibility, but
   it's still a real one.
  
  
   Not luck. Percentage of risk.
  
  They're both ways of saying you're rolling the dice.  And in every
  situation we're rolling the dice, it's just a question of how many and
 
 Well my point was all about risk versus reward. For many, a 3% risk is
 more than appropriate. That isn't luck, it is a calculation of risk.
 
True, but in many cases the analysis of risk/reward is flawed by not
including the true cost of a protracted outage. Some of the second
order effects can be nasty if not included originally. I would also
recommend that the analysis and implementation be signed-off at the
highest levels -- that is where the head-hunting will start.

Cheers,
Ken



Re: [ADMIN] password authentication failed for user postgres

2010-03-02 Thread Kenneth Marshall
You cannot have a '*' in listen_addresses with anything else.
That already means bind to EVERY available interface. When you
added localhost, it tried to bind localhost a second time. Try just
using '*'.

Cheers,
Ken
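In other words, the working setting is simply (sketch):

```
# postgresql.conf -- '*' already covers localhost and every other
# interface; do not combine it with additional entries
listen_addresses = '*'
```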

On Tue, Mar 02, 2010 at 07:19:47PM +0530, Major Services wrote:
 pgstartup.log:
 
 The files belonging to this database system will be owned by user
 postgres.
 
 
 
 Success. You can now start the database server using:
 /usr/bin/postmaster -D /var/lib/pgsql/data
 or
 /usr/bin/pg_ctl -D /var/lib/pgsql/data -1 logfile start
  FATAL: lock file "postmaster.pid" already exists
  HINT: Is another postmaster (PID 3300) running in data
  directory "/var/lib/pgsql/data"?
 LOG:  logger shutting down
 runuser: cannot set groups: Operation not permitted
 LOG:  logger shutting down
 FATAL: syntax error in file /var/lib/pgsql/data/postgresql.conf, near
 token ,
 
 
 Now, in the /var/lib/pgsql/data/postgresql.conf the only change I have
 made is in listen_addresses; viz
 listen_addresses = 'localhost', '*'
 
 Let me know how to move ahead.
 Thanks.
 
  On Tue, Mar 2, 2010 at 5:55 PM, Iñigo Martinez Lasala 
  imarti...@vectorsf.com wrote:
 
   Ah... ok.
  In /var/lib/pgsql you will find a file called pgstartup.log
  Could you post here last lines of this file ?
 
 
 
  -Original Message-
   *From*: Major Services services.ma...@gmail.com
   *To*: Iñigo Martinez Lasala imarti...@vectorsf.com
  *Cc*: pgsql-admin@postgresql.org
  *Subject*: Re: [ADMIN] password authentication failed for user postgres
  *Date*: Tue, 2 Mar 2010 16:40:47 +0530
 
  But the server restart has failed; I am yet to make the changes suggested
  by you in the first instance.
 
 
 
 
   On Tue, Mar 2, 2010 at 4:32 PM, Iñigo Martinez Lasala 
   imarti...@vectorsf.com wrote:
 
  Try with these parameters in pg_hba.conf. Perhaps your postgres-redhat
  compilation does not have ident enabled...
 
  local all postgres trust
 
 
 
 
  -Original Message-
   *From*: Major Services services.ma...@gmail.com
 
 
    *To*: Iñigo Martinez Lasala imarti...@vectorsf.com
  *Subject*: Re: [ADMIN] password authentication failed for user postgres
  *Date*: Tue, 2 Mar 2010 16:03:59 +0530
 
  Thank you for your message. But am in another problem now.
  In between I also tried to restart the server with 'service postgresql
  restart'
  It stopped the server, but then *server start failed*.
  Had restarted earlier also, without a problem.
 
  What should I do?
 
   On Tue, Mar 2, 2010 at 2:44 PM, Iñigo Martinez Lasala 
   imarti...@vectorsf.com wrote:
    In order to launch createuser you don't need to modify listen_addresses
   since
    you can use unix sockets. But anyway it's a requirement if you are going
   to
   access remotely.
  
   Add this line in first place to your pg_hba.conf
   local   all postgres  ident sameuser
  
  
   Reload postgresql and launch createuser as postgres user.
  
  
   -Original Message-
   From: Major Services services.ma...@gmail.com
   To: pgsql-admin@postgresql.org
   Subject: [ADMIN] password authentication failed for user postgres
   Date: Tue, 2 Mar 2010 14:13:10 +0530
  
   Hi,
  
   1. I have added  listen_addresses = '*' ; in  postgresql.conf
  
   2. And have made the following changes to pg_hba.conf
  
    local   DATABASE   USER   METHOD
    local   samegroup  all    password
    host    DATABASE   USER   CIDR-ADDRESS   METHOD  [OPTION]
    host    all        all    0.0.0.0/0      md5
    hostssl all        all    0.0.0.0/0      md5
  
   3. Have restarted the server
  
   4. I can login to the server as user postgres.
  
   But  while trying to createuser I get the following message  :
   'createuser: could not connect to database postgres:  FATAL :
   password authentication failed for user postgres
  
   This is on Redhat running Plesk also.
  
   I am new to Linux as well as PostgreSQL. Help would be much appreciated.
  
   Thanks
   Viv
  
  
  
 
 
 
 
 
 



Re: [ADMIN] AIX - Out of Memory

2010-02-16 Thread Kenneth Marshall
Hi Francis,

I did not see my last response so I am trying again.

From some Googling:

  If you need more memory (larger data segment) for your
  Perl programs you can set:

/etc/security/limits
default:(or your user)
data = -1   (default is 262144 * 512 byte)

  With the default setting the size is limited to 128MB.
  The -1 removes this limit. If the make test fails please
  change your /etc/security/limits as stated above.

Regards,
Ken

On Tue, Feb 16, 2010 at 04:04:41PM -, Thorne, Francis wrote:
  
 Thanks for the prompt reply, if I'm totally honest I'm not 100% sure the
 postgres install has built correctly into a 64-bit build.  I found it
 really difficult to find any documentation on how to configure Postgres
 correctly for a 64-bit install onto an AIX system.  The only pieces of
 information I found where from this forum and off the back of that I
 configured postgres with the following settings.
 
 BINDIR = /usr/local/pgsql837-64/bin
 
 DOCDIR = /usr/local/pgsql837-64/doc
 
 INCLUDEDIR = /usr/local/pgsql837-64/include
 
 PKGINCLUDEDIR = /usr/local/pgsql837-64/include
 
 INCLUDEDIR-SERVER = /usr/local/pgsql837-64/include/server
 
 LIBDIR = /usr/local/pgsql837-64/lib
 
 PKGLIBDIR = /usr/local/pgsql837-64/lib
 
 LOCALEDIR =
 
 MANDIR = /usr/local/pgsql837-64/man
 
 SHAREDIR = /usr/local/pgsql837-64/share
 
 SYSCONFDIR = /usr/local/pgsql837-64/etc
 
 PGXS = /usr/local/pgsql837-64/lib/pgxs/src/makefiles/pgxs.mk
 
 CONFIGURE = '--prefix=/usr/local/pgsql837-64' '--with-pgport=5422'
 '--enable-thread-safety' '--enable-integer-datetimes' 'CC=gcc -maix64'
 'LDFLAGS=-Wl,-bbigtoc'
 CC = gcc -maix64
 
 CPPFLAGS =
 
 CFLAGS = -O2 -Wall -Wmissing-prototypes -Wpointer-arith -Winline
 -Wdeclaration-after-statement -Wendif-labels -fno-strict-aliasing -fwrapv
 
 CFLAGS_SL =
 
 LDFLAGS = -Wl,-bbigtoc
 -Wl,-blibpath:/usr/local/pgsql837-64/lib:/usr/lib:/lib   
 LDFLAGS_SL = -Wl,-bnoentry -Wl,-H512 -Wl,-bM:SRE
 
 LIBS = -lpgport -lz -lreadline -lld -lm
 
 VERSION = PostgreSQL 8.3.7   
 
 After this I configured the ulimit for the postgres user to the
 following so that it had unlimited memory access 
 
 core file size(blocks, -c) 1048575   
 data seg size (kbytes, -d) 131072
 file size (blocks, -f) unlimited 
 max memory size   (kbytes, -m) unlimited 
 open files(-n) 2000  
 pipe size  (512 bytes, -p) 64
 stack size(kbytes, -s) 32768 
 cpu time (seconds, -t) unlimited 
 max user processes(-u) 2048  
 virtual memory(kbytes, -v) unlimited  
 
 The version of postgres that we are running is 8.3.7.  If you can see an
 obvious step that I have missed out or something I have configured
 incorrectly I would be grateful if you could let me know. Our install
 isn't really using temp tables as far as I can see.
 
 Thanks again for all your help
 
 Fran
 -Original Message-
 From: Brad Nicholson [mailto:bnich...@ca.afilias.info] 
 Sent: 16 February 2010 15:21
 To: Kenneth Marshall
 Cc: Tom Lane; Thorne, Francis; pgsql-admin@postgresql.org
 Subject: Re: [ADMIN] AIX - Out of Memory
 
 On Mon, 2010-02-15 at 10:18 -0600, Kenneth Marshall wrote:
  On Mon, Feb 15, 2010 at 10:57:06AM -0500, Tom Lane wrote:
   Thorne, Francis thor...@cromwell.co.uk writes:
Looking for some help with regards to an 'Out of Memory' issue I
 have
with our Postgresql install on AIX.  When running large updates or
select queries we get an out of memory error returned and details
entered in the log file like below.  This is a 64-bit install and
 I have
set the ulimit for the postgres user to unlimited.  
   
   The bloat seems to be here:
   
AfterTriggerEvents: 131063808 total in 26 blocks; 576 free (7
chunks); 131063232 used
   
   but it's hard to believe you'd be getting out of memory after only
   130MB in a 64-bit build.  Are you *sure* the postgres executable is
   64-bit?  Are you *sure* the postmaster has been launched with
   nonrestrictive ulimit?  On lots of setups that takes modifying the
   PG startup script, not just fooling with some user's .profile.
   
This is a 64-bit install (8.3) on AIX 5.3
   
   8.3.what?
   
 regards, tom lane
  
  I no longer have an AIX box, but I had similar problems with other
  applications that needed large amounts of memory. Some OS specific
  steps needed to be taken to allow normal users to allocate large
  blocks of memory. The information needed was in their on-line docs
  as I recall, but I do not remember the details. The executables may
  need to be built with specific options/flags to work.
  
 
 AIX has other limits besides the ulimit, there for security purposes I
 believe.  2GB per process is the default.
 
 To OP - what is the size of the postgres process?
 Are you using temp tables heavily or frequently in a given session?
 
 I've seen the same issue

Re: [ADMIN] AIX - Out of Memory

2010-02-15 Thread Kenneth Marshall
On Mon, Feb 15, 2010 at 10:57:06AM -0500, Tom Lane wrote:
 Thorne, Francis thor...@cromwell.co.uk writes:
  Looking for some help with regards to an 'Out of Memory' issue I have
  with our Postgresql install on AIX.  When running large updates or
  select queries we get an out of memory error returned and details
  entered in the log file like below.  This is a 64-bit install and I have
  set the ulimit for the postgres user to unlimited.  
 
 The bloat seems to be here:
 
  AfterTriggerEvents: 131063808 total in 26 blocks; 576 free (7
  chunks); 131063232 used
 
 but it's hard to believe you'd be getting out of memory after only
 130MB in a 64-bit build.  Are you *sure* the postgres executable is
 64-bit?  Are you *sure* the postmaster has been launched with
 nonrestrictive ulimit?  On lots of setups that takes modifying the
 PG startup script, not just fooling with some user's .profile.
 
  This is a 64-bit install (8.3) on AIX 5.3
 
 8.3.what?
 
   regards, tom lane

I no longer have an AIX box, but I had similar problems with other
applications that needed large amounts of memory. Some OS specific
steps needed to be taken to allow normal users to allocate large
blocks of memory. The information needed was in their on-line docs
as I recall, but I do not remember the details. The executables may
need to be built with specific options/flags to work.

Regards,
Ken



Re: [ADMIN] how to speed ilike

2010-01-25 Thread Kenneth Marshall
On Mon, Jan 25, 2010 at 05:33:10PM +0200, Julius Tuskenis wrote:
 Thank you for your answer Kevin.
 create index tblname_username_latin on tblname
 ((fnk_latin(username)));

 Tried this, but with no changes
 You might want to have that function force all letters to lowercase.

 Tried that too, but it seems to me that ILIKE doesn't use the index. I'm using 
 username ilike '%blablabla%', so maybe there's no way for ilike to benefit 
 from an index.

You cannot use an index for this search. An index will work for 'blah%';
otherwise you need to use full-text indexes a la tsearch.

Cheers,
Ken
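As a sketch (table, column, and pattern names here are made up for illustration): an expression index helps only when the pattern is anchored at the start, while an unanchored '%...%' search needs full-text machinery instead.

```sql
-- Anchored, case-insensitive: index on lower(username) with pattern ops
CREATE INDEX tblname_username_lower
    ON tblname (lower(username) text_pattern_ops);
SELECT * FROM tblname WHERE lower(username) LIKE 'blah%';  -- can use the index

-- Unanchored '%blah%' cannot use a btree index at all. tsearch matches
-- whole words (lexemes) rather than arbitrary substrings, but is indexable:
SELECT * FROM tblname
 WHERE to_tsvector('simple', username) @@ to_tsquery('simple', 'blah');
```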



Re: [ADMIN] select max(parent_table) slow

2009-08-18 Thread Kenneth Marshall
On Tue, Aug 18, 2009 at 06:30:15AM -0700, Anj Adu wrote:
 We are testing 8.4 and noticed that the performance of
 
 select max(indexed_col) from parent_Table
 
 is the same (slow) as postgres 8.1.x (our current version). The
 child tables are huge and the indexed_col is not the basis for
 partitioning.
 
 I remember that this was an issue from an earlier thread where the
 optimizer was not able to use the child partitions only to get the max
 value. Is this true for 8.4 ?
 
This is still a problem as of 8.4.

Regards,
Ken



Re: [ADMIN] select max(parent_table) slow

2009-08-18 Thread Kenneth Marshall
On Tue, Aug 18, 2009 at 08:38:28AM -0500, Plugge, Joe R. wrote:
 What is your constraint_exclusion config set to on your new instance?
 
Even with it set to the default of partition, the query does
not push the max() down to the child tables and does a sequential
scan of the tables and does not use the indexes that are present.
There has been a recent thread on hackers about how to make that
work. It does not yet and requires the application to break the
queries up by the child tables to get reasonable performance.

Regards,
Ken
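The application-side workaround can be sketched like this (child-table names are hypothetical): each per-child max() can use that child's own index, and the outer aggregate only combines a handful of values.

```sql
SELECT max(m) FROM (
    SELECT max(indexed_col) AS m FROM parent_table_2009_07
    UNION ALL
    SELECT max(indexed_col) AS m FROM parent_table_2009_08
) AS per_child;
```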

 -Original Message-
 From: pgsql-admin-ow...@postgresql.org 
 [mailto:pgsql-admin-ow...@postgresql.org] On Behalf Of Anj Adu
 Sent: Tuesday, August 18, 2009 8:30 AM
 To: pgsql-admin@postgresql.org
 Subject: [ADMIN] select max(parent_table) slow
 
 We are testing 8.4 and noticed that the performance of
 
 select max(indexed_col) from parent_Table
 
 is the same (slow) as postgres 8.1.x ( our current version).The
 child tables are huge and the indexed_col is not the basis for
 partitioning.
 
 I remember that this was an issue from an earlier thread where the
 optimizer was not able to use the child partitions only to get the max
 value. Is this true for 8.4 ?
 
 



Re: [ADMIN] Warm standby with 8.1

2009-08-18 Thread Kenneth Marshall
James,

You really, really, really should upgrade to a more recent
release. The list of improvements and bugfixes is long and
well worth having. 8.1 was released 4 years ago. If you
cannot, please, please make certain that you are running
the latest point release -- regardless of what is shipping
with RHEL5.

Good luck,
Ken

On Tue, Aug 18, 2009 at 11:27:40AM -0400, james bardin wrote:
 Hello,
 
 I'm working on a warm standby system, and we would like to stick with
 the RHEL5 distributed version of postgres, which is still 8.1.
 
 I can setup the system to a point where it's adequate for disaster
 recovery, but I'm not comfortable keeping the systems in sync for
 failover, maintenance, or testing. I'm familiar with the newer
 versions, and most documentation for this doesn't mention 8.1 any
 more.
 
 Could someone confirm if this is what I need to do on 8.1?
 - Create backup, and send to standby server.
 - Ship WALs with archive_command
 - When we need to switch over; shutdown, sync pg_xlog, and do a
 recovery on the new server.
 
 If I have to sync pg_xlog and recover anyway to get the latest
 transactions, should I forgo shipping the WAL archives? Switching
 between servers should be rare, but I want something that admins
 without postgres experience can handle.
 
 Should I resort to slony?
 Use the latest version from the postgres yum repo, and vet it locally?
 
 Thanks
 -jim
 
 



Re: [ADMIN] Warm standby with 8.1

2009-08-18 Thread Kenneth Marshall
On Tue, Aug 18, 2009 at 12:09:25PM -0400, james bardin wrote:
 Thanks Ken. The more I think about it, the more I feel we should move
 away from upstream, and pull the latest version for this.
 
 
 So on with newer versions -
 When using the built-in systems for warm standby, how do I ensure that
 the latest transactions have been archived? Does a clean shutdown
 flush and archive the WAL within the archive_timeout period?
 
 Thanks
 -jim
 

Hi Jim,

I think that you will be really happy with the improvements in the
8.4 release. Regarding the warm-standby, take a look at the contrib
module pg_standby in the 8.4 release. It includes a good example
implementation for this process.

Regards,
Ken
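For reference, a minimal pg_standby setup looks roughly like this (paths are hypothetical):

```
# Primary's postgresql.conf:
archive_mode    = on
archive_command = 'cp %p /mnt/standby_archive/%f'

# Standby's recovery.conf, using the pg_standby contrib program:
restore_command = 'pg_standby /mnt/standby_archive %f %p %r'
```

pg_standby waits for each WAL segment to appear, cleans up segments older than %r, and can be told (via its -t option) to watch a trigger file whose creation ends recovery and brings the standby up.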

 
 
 
 On Tue, Aug 18, 2009 at 11:33 AM, Kenneth Marshallk...@rice.edu wrote:
  James,
 
  You really, really, really should upgrade to a more recent
  release. The list of improvements and bugfixes is long and
  well worth having. 8.1 was released 4 years ago. If you
  cannot, please, please make certain that you are running
  the latest point release -- regardless of what is shipping
  with RHEL5.
 
  Good luck,
  Ken
 
 



[ADMIN] error from postgresql 8.4.0 after a dump/restore from 8.3.5

2009-08-05 Thread Kenneth Marshall
Dear PostgreSQL admins.

I have just restored a DB from 8.3.5 to 8.4.0 and I just
received the error:

ERROR:  column reltriggers does not exist at character 41
STATEMENT:  SELECT relhasindex, relkind, relchecks, reltriggers, relhasrules, 
relhasoids , reltablespace 
FROM pg_catalog.pg_class WHERE oid = '16864'


Does anyone know what is giving that error and how to fix
it? I do see that in 8.4.0 the table has a column named
relhastriggers, while 8.3 has reltriggers. I used pg_dump
from 8.4 to dump the DB but I restored by piping into psql.
Any ideas?

Regards,
Ken
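For what it's worth, the 8.4 equivalent of that catalog query would presumably just use the renamed column:

```sql
SELECT relhasindex, relkind, relchecks, relhastriggers, relhasrules,
       relhasoids, reltablespace
FROM pg_catalog.pg_class WHERE oid = '16864';
```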



Re: [ADMIN] error from postgresql 8.4.0 after a dump/restore from 8.3.5

2009-08-05 Thread Kenneth Marshall
I just thought of something, would running an older release of pgadmin
cause this error?

Cheers,
Ken

On Wed, Aug 05, 2009 at 06:29:07PM -0500, Kenneth Marshall wrote:
 Dear PostgreSQL admins.
 
 I have just restored a DB from 8.3.5 to 8.4.0 and I just
 received the error:
 
 ERROR:  column reltriggers does not exist at character 41
 STATEMENT:  SELECT relhasindex, relkind, relchecks, reltriggers, relhasrules, 
 relhasoids , reltablespace 
 FROM pg_catalog.pg_class WHERE oid = '16864'
 
 
 Does anyone know what is giving that error and how to fix
 it? I do see that in 8.4.0 the table has a column named
 relhastriggers, while 8.3 has reltriggers. I used pg_dump
 from 8.4 to dump the DB but I restored by piping into psql.
 Any ideas?
 
 Regards,
 Ken
 
 



Re: [ADMIN] Slony-I Version with Postgres 8.4.0

2009-08-03 Thread Kenneth Marshall
You need the latest release candidate that was announced last week.
The expected actual release is to be this week. I am waiting for
that.

Regards,
Ken

On Mon, Aug 03, 2009 at 04:02:16PM -0500, Plugge, Joe R. wrote:
 What version of Slony-I is ok to use with version 8.4.0, I am getting this 
 error when trying to make against 8.4.0 on RHEL4:
 
 [postg...@linux1558 slony1-1.2.15]$ make
 make[1]: Entering directory `/home/postgres/slony1-1.2.15/src'
 make[2]: Entering directory `/home/postgres/slony1-1.2.15/src/xxid'
 gcc -g -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -fpic -I../.. 
 -I/usr/local/pgsql/include/ -I/usr/local/pgsql/include/server/  -c -o xxid.o 
 xxid.c
 xxid.c: In function `_Slony_I_getMinXid':
 xxid.c:236: error: `SerializableSnapshot' undeclared (first use in this 
 function)
 xxid.c:236: error: (Each undeclared identifier is reported only once
 xxid.c:236: error: for each function it appears in.)
 xxid.c: In function `_Slony_I_getMaxXid':
 xxid.c:249: error: `SerializableSnapshot' undeclared (first use in this 
 function)
 make[2]: *** [xxid.o] Error 1
 make[2]: Leaving directory `/home/postgres/slony1-1.2.15/src/xxid'
 make[1]: *** [all] Error 2
 make[1]: Leaving directory `/home/postgres/slony1-1.2.15/src'
 make: *** [all] Error 2
 
 
 



Re: [ADMIN] WAL archiving and backup TAR

2009-06-23 Thread Kenneth Marshall
On Tue, Jun 23, 2009 at 10:18:30PM +0200, Jakov Sosic wrote:
 On Fri, 19 Jun 2009 09:43:28 -0600
 torrez tor...@unavco.org wrote:
 
  Hello,
  I'm implementing WAL archiving and PITR on my production DB.
   I've set up my TAR, WAL archives and pg_xlog all to be stored on a  
   separate disk than my DB.
  I'm at the point where i'm running 'Select pg_start_backup('xxx');'.
  
  Here's the command i've run for my tar:
  
   time tar -czf /pbo/podbackuprecovery/tars/pod-backup-${CURRDATE}.tar.gz \
     /pbo/pod > /pbo/podbackuprecovery/pitr_logs/backup-tar-log-${CURRDATE}.log 2>&1
  
  The problem is that this tar took just over 25 hours to complete.  I  
   expected this to be a long process since my DB is about 100  
  gigs.
  But 25hrs seems a bit too long.  Does anyone have any ideas how to
  cut down on this time?
  
  Are there limitations to tar or gzip related to the size i'm working  
  with, or perhaps as a colleague suggested, tar/zip is a single
  thread process and it may be bottlenecking one CPU (we run multiple
  core). When I run top, gzip is running at about 12% of the CPU and
   tar is around .4%, which adds up to about 1/8 of total CPU, i.e. one
   full CPU on our server since we have 8.
  
  After making the .conf file configurations I restarted my DB and  
  allowed normal transactions while I do the tar/zip.
  
  Your help is very much appreciated.
 
  Transfer it first and compress later. We have a production db of around
  170 GB and a backup takes around 2h to the Tivoli Storage Manager server via
  ethernet (to an IBM tape library).
 
  I would not prefer bzip over gzip, because it is less tested, and
  generally you don't want your backup archive to carry even a minor
  shadow of doubt. Production environment maybe, but backups never...
 

+1

The gzip step is holding up the copy the most. Another thing that
might be worth trying is the star program. It can use a shared
memory buffer to allow very rapid archiving.

Cheers,
Ken
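The transfer-first, compress-later idea can be sketched as below; this demo uses a throwaway directory so it can be run anywhere, with the real /pbo paths from the original command substituted in practice:

```shell
# Stand-in for the real data directory
mkdir -p /tmp/demo_src
echo "sample data" > /tmp/demo_src/file.txt

# Step 1: plain tar, no inline compression -- the fast, I/O-bound part
tar -cf /tmp/demo-backup.tar -C /tmp demo_src

# Step 2: compress afterwards (or after transferring to the backup host)
gzip -f /tmp/demo-backup.tar

ls /tmp/demo-backup.tar.gz
```

Splitting the steps keeps gzip's single CPU-bound thread off the critical path of the copy.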



Re: [ADMIN] Replication for migration

2009-05-14 Thread Kenneth Marshall
On Thu, May 14, 2009 at 10:57:26AM -0300, Martin Spinassi wrote:
 Hi list!
 
 I've to do a migration of postgresql from version 8.2 to 8.3 in one
 server. The problem I see here is, to avoid data loss (people still
 using the web site that uses postgres), I'll need to stop postgres 8.2,
 make a pg_dumpall, and then restore that information in 8.3.
 Taking some considerations, like database size, server, etc. I estimate
 that it will be at least 30 minutes of downtime, and that's pretty much
 time.
 
 I've been thinking of the possibility of doing some kind of master/slave with
 both versions (I must change the default port in one of them, as
 everything must happen on a single machine), and find some useful links
 like:
 
 http://wiki.postgresql.org/wiki/Replication%2C_Clustering%
 2C_and_Connection_Pooling
 http://momjian.us/main/writings/pgsql/replication.pdf
 
 
 I'd love to read some experiences or advice, since I haven't done it
 before. Also if you have some link or document that you think I must
 read, it'll be welcome.
 
 
 Thanks to all.
 
 
 Mart??n
 

I used Slony replication to upgrade a database from 8.2 to 8.3 and
it worked quite well. You will need an outage to change the master
to the 8.3 database and re-point your apps, but if scripted, that
is minimal.

Cheers,
Ken



Re: [ADMIN] CLUSTER not in multi-command string?

2009-04-29 Thread Kenneth Marshall
On Wed, Apr 29, 2009 at 10:24:51PM +0200, Michael Monnerie wrote:
 On Mittwoch 29 April 2009 Alvaro Herrera wrote:
  Michael Monnerie wrote:
   ERROR:  CLUSTER cannot be executed from a function or multi-command
   string
  
   I got this error from a script which tried:
   psql -U postgres db1 -c 'cluster ; analyze'
  
   In the docs there's no word that it's not possible, and I wonder
   why this is not allowed. Anybody knows? It's stupid, as I want to
   cluster lots of tables, followed by analyze, and now I have to
   psql -U postgres db1 -c 'cluster'
   psql -U postgres db1 -c 'analyze'
 
  Multi-table cluster (i.e. the one where you don't specify one table
  to cluster) cannot be used in a transaction block, function, etc, and
  the FM does mention it:
 
  CLUSTER without any parameter reclusters all the
  previously-clustered tables in the current database that the calling
  user owns, or all such tables if called by a superuser. This form of
  CLUSTER cannot be executed inside a transaction block.
 
 Ah, found this now, in the docs 8.1-8.3. But strange thing is..
 
 [I sent this mail this morning at 07:08 local time but it didn't make it 
 through, without me getting any note:]
 
 I looked further now: Above error was from Postgres 8.3.5 on openSUSE 
 11.1, but it works on Postgres 8.1.11 from openSUSE 10.2. Why did 
 it become worse? According to docs this shouldn't be possible even in 
 8.1. Is there any chance I can make a single db call with cluster and
 analyze, like:
 cluster ; begin ; analyze ; commit
 
 mfg zmi

Issue a cluster command for each table individually in the
command stream.

Cheers,
Ken
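That is (table names invented for illustration), the per-table form of CLUSTER is transaction-safe, so the multi-command string then works:

```sql
-- e.g. psql -U postgres db1 -c 'CLUSTER tbl_a; CLUSTER tbl_b; ANALYZE;'
CLUSTER tbl_a;
CLUSTER tbl_b;
ANALYZE;
```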



Re: [ADMIN] case insensitive like searches

2009-04-15 Thread Kenneth Marshall
In 8.4 there is a citext contrib module that should do the
trick.

Cheers,
Ken
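A sketch of what that looks like, assuming the citext contrib type has been loaded into the database (table name invented):

```sql
CREATE TABLE users (name citext);
INSERT INTO users VALUES ('Mark');

SELECT * FROM users WHERE name = 'mark';    -- matches despite the case difference
SELECT * FROM users WHERE name LIKE 'MA%';  -- LIKE on citext is case-insensitive too
```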

On Mon, Apr 13, 2009 at 09:59:11AM -0600, Mark Dueck wrote:
 Is there a way to make Postgres do case insensitive like searches?  I
 know you can use ilike instead, but I don't have control of the
 software.  Is there a plan to enable this in the future? 
 
 Can I recompile Postgres to make like searches do the same as ilike, or
 is it easier to create a new character encoding that will be case
 insensitive?
 
 Thanks,
 Mark
 
 



Re: [ADMIN] Crash with data corruption under Windows

2009-02-20 Thread Kenneth Marshall
On Fri, Feb 20, 2009 at 03:32:16PM +0100, Nicola Mauri wrote:

 We run into the following issue with Windows 2003 and Postgres 8.2.6 while 
 database was running:

   FATAL:  pg_tblspc/16405/37638 is not a valid data directory
   DETAIL:  File pg_tblspc/16405/37638/PG_VERSION is missing.
   ERROR:  could not open relation 16405/37638/2661: No such file or 
 directory
   ERROR:  could not open relation 16405/37638/2659: No such file or 
 directory
   ERROR:  could not write block 4 of relation 16405/37638/37656: Permission 
 denied
   CONTEXT:  writing block 4 of relation 16405/37638/37656
   ...
   WARNING:  could not write block 4 of 16405/37638/37656
   DETAIL:  Multiple failures --- write error may be permanent.

 This happened 4 times in the last few months! Usually after the crash 
 datafiles appear to be corrupted, but in some other cases they completely 
 disappear from the filesystem (tablespace directory is empty) and we have 
 to recreate the entire db from the last dump.

 No suspicious activities have been detected on the server (unauthorized 
 accesses, anti-virus intervention) and information about disappeared files 
 cannot be found using an undelete utilities. Disk hardware is healthy and 
 no other part of the filesystem seems to be affected by such strange 
 deletions (several applications, including an oracle database, are 
 correctly running on the server).

 Since the problem seems involving only directories containing tablespaces 
 (stored on local partition E:\) we are pointing our attention to Reparse 
 Point and NTFS Junction mechanism.

 Could be there issues in those features?

 Thanks in advance,
 Nicola Mauri

I did not check the release notes, but you do realize that you are
six minor releases behind the latest stable 8.2 version? An upgrade
may well help.

Cheers,
Ken

-- 
Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-admin


Re: [ADMIN] Tsearch module

2009-02-11 Thread Kenneth Marshall
Full-text indexing is built into PostgreSQL 8.3 and higher; there is
no separate tsearch module to install. Just follow the instructions
in the manual.
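
A minimal example of the built-in machinery (table and data are hypothetical):

```sql
CREATE TABLE docs (id serial PRIMARY KEY, body text);
INSERT INTO docs (body) VALUES ('PostgreSQL full text searching');

-- GIN index so the @@ match below does not have to scan the table:
CREATE INDEX docs_fts_idx ON docs USING gin (to_tsvector('english', body));

SELECT id FROM docs
 WHERE to_tsvector('english', body) @@ to_tsquery('english', 'text & search');
```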

Cheers,
Ken

On Wed, Feb 11, 2009 at 04:33:07PM -0500, Carol Walter wrote:
 Hello,

 How do I install the tsearch module in 8.4?  I have found documentation on 
 its use, but it all seems to start with "After you install the tsearch 
 module..."  It doesn't say how that is done. Could someone tell me how it 
 is done or point me to the appropriate documentation?

 Thanks very much,
 Carol





Re: [ADMIN] bytea size limit?

2009-01-21 Thread Kenneth Marshall
On Wed, Jan 21, 2009 at 05:21:03AM -0800, paulo matadr wrote:
 I everyone, need help!!!
 My aplication return erro:
 
 2009-01-20 10:22:03,264 INFO  [STDOUT] Caused by: org.postgresql.util.
 PSQLException: ERROR: invalid memory alloc request size 1705447581
 
  TABLE batch.relatorio_gerado
 
   rege_id integer NOT NULL,
   fuin_id integer NOT NULL,
   rela_id integer NOT NULL,
   rege_tmultimaalteracao timestamp without time zone NOT NULL DEFAULT now(),
   rege_nnpaginas integer NOT NULL,
   rege_binario bytea,
  rege_pdf bytea
 
 I get this erro above  refers the column rege_pdf bytea   , when try 
 generate report in pdf.
 
 thanks for help
 

I believe that the size limit for a bytea or text field is currently
1GB; the failed allocation of 1705447581 bytes (about 1.6GB) is over
that limit.

Ken



Re: [ADMIN] [GENERAL] bytea size limit?

2009-01-21 Thread Kenneth Marshall
On Wed, Jan 21, 2009 at 01:55:28PM +, Grzegorz Jaśkiewicz wrote:
 there's no real limit (its size is described with 32bit number, and
 that's the only limitation here).
 But you need to be aware, that content is sent over at once, so memory
 is the limit in your case.
 
 http://www.postgresql.org/docs/8.3/static/datatype-binary.html
 
 For such large objects, it might be actually better to store them
 separate as files, and just store file name.
 

The TOAST implementation, however, allows only 30 bits for the
size of a TOAST entry, which caps the size at 2^30 bytes, or 1GB. I
agree that he could very well also be limited by the memory on
his system.
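
The arithmetic, if you want to check it from psql:

```sql
-- 32-bit varlena length word minus the 2 bits TOAST reserves:
SELECT (1::bigint << 30) - 1 AS max_toastable_bytes;  -- 1073741823, just under 1 GB
```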

Cheers,
Ken



Re: [ADMIN] [GENERAL] bytea size limit?

2009-01-21 Thread Kenneth Marshall
On Wed, Jan 21, 2009 at 02:09:01PM +, Grzegorz Jaśkiewicz wrote:
 On Wed, Jan 21, 2009 at 2:06 PM, Kenneth Marshall k...@rice.edu wrote:
 
  The TOAST implementation however only allows 30-bits for the
  size of the TOAST entry which caps the size at 2^30 or 1GB. I
  agree that he could very well be limited also by the memory on
  his system.
 
 i wasn't aware of that, and also - it doesn't say anything about it in docs.
 As for limitations, that also depends on db drivers he is using, etc,
 etc. I use bytea to store 100-200MB objects in many dbs, but I
 wouldn't go as far as 1.5GB ...
 
The reference is in:
http://www.postgresql.org/docs/8.3/static/storage-toast.html

Here is the pertinent excerpt:

Only certain data types support TOAST -- there is no need to impose the
overhead on data types that cannot produce large field values. To support
TOAST, a data type must have a variable-length (varlena) representation,
in which the first 32-bit word of any stored value contains the total
length of the value in bytes (including itself). TOAST does not constrain
the rest of the representation. All the C-level functions supporting a
TOAST-able data type must be careful to handle TOASTed input values.
(This is normally done by invoking PG_DETOAST_DATUM before doing anything
with an input value, but in some cases more efficient approaches are possible.)

TOAST usurps two bits of the varlena length word (the high-order bits on
big-endian machines, the low-order bits on little-endian machines),
thereby limiting the logical size of any value of a TOAST-able data type
to 1 GB (2^30 - 1 bytes). When both bits are zero, the value is an ordinary
un-TOASTed value of the data type, and the remaining bits of the length
word give the total datum size (including length word) in bytes. When the
highest-order or lowest-order bit is set, the value has only a single-byte
header instead of the normal four-byte header, and the remaining bits give
the total datum size (including length byte) in bytes. As a special case,
if the remaining bits are all zero (which would be impossible for a
self-inclusive length), the value is a pointer to out-of-line data stored
in a separate TOAST table. (The size of a TOAST pointer is given in the
second byte of the datum.) Values with single-byte headers aren't aligned
on any particular boundary, either. Lastly, when the highest-order or
lowest-order bit is clear but the adjacent bit is set, the content of the
datum has been compressed and must be decompressed before use. In this
case the remaining bits of the length word give the total size of the
compressed datum, not the original data. Note that compression is also
possible for out-of-line data but the varlena header does not tell whether
it has occurred -- the content of the TOAST pointer tells that, instead.

Cheers,
Ken



Re: [ADMIN] [GENERAL] bytea size limit?

2009-01-21 Thread Kenneth Marshall
On Wed, Jan 21, 2009 at 03:07:13PM +, Grzegorz Jaśkiewicz wrote:
 On Wed, Jan 21, 2009 at 3:02 PM, paulo matadr saddon...@yahoo.com.br wrote:
  My system have very large ram size, so its possible review postgresql.conf ?
 
 all depends on how you access DB, what type of drivers (odbc, libpq, etc, 
 etc).
 See, every time you pass a row , they usually have to allocate that
 much memory, not only its quite inefficient, but also slow.
 
 As for the configuration option, I am interested in knowing myself too :)
 
I do not think that the size limit is a runtime option. It is currently
compiled into the server.

Ken



Re: [ADMIN] tuning tips, speed problem

2009-01-16 Thread Kenneth Marshall
On Fri, Jan 16, 2009 at 04:18:02PM +0100, Francesco Andreozzi wrote:
 Hi all, i send this mesage because i just made a migration from my old 
 mysql db on a more robust postgres server.
 I'm using version 8.1 on a linux debian 4.0
 i have a dual core XEON 2.33GHz and 2Gb of ram ... 2 disk 15k mirrored
 database works good but the performance are horrible! i hope is my fault of 
 settings and this message is just a help request to see whats i missing 
 to set!
 i did only few changes on a default postgresql.conf
 i simply playd with shared buffer... but on a mysql to execute a simple 
 select with an order by the time was jus 1 or 2 seconds ... on this 
 installation the time it's about 10 seconds or more  and if more than 
 one query is executed at the same time the time increase!

 i just add here a show all output to see if somethiong uis wrong ... i hope 
 someone can help me ! i really would like to use postgres on this 
 database!!

 Thankyou
 Francesco

Francesco,

Try increasing your work_mem and shared_buffers values; 8MB of
shared_buffers is not very large on a 2GB system. Also, 8.3.5 is much
improved over 8.1.
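
As a starting point, something along these lines in postgresql.conf (these
numbers are only an assumed sketch for a dedicated 2GB box, not values from
this thread; note that 8.1 takes shared_buffers in 8kB pages and work_mem
in kB, the unit suffixes below need 8.2+):

```
shared_buffers = 256MB         # the ~8 MB default is far too small here
work_mem = 16MB                # per sort/hash, so leave head-room for concurrency
effective_cache_size = 1GB     # rough estimate of OS cache available to the DB
```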

Cheers,
Ken

 shared_buffers  1000
...
 work_mem1024



Re: [ADMIN] Help request: how to tune performance?

2008-09-18 Thread Kenneth Marshall
Hi,

The only other thing to check is what indexes are defined for
your schema. You can look at a previous post about PostgreSQL
indexing for RT to see what we are using here at Rice. Let me
know if you have any questions.

Cheers,
Ken

On Thu, Sep 18, 2008 at 09:00:14PM +0300, Mauri Sahlberg wrote:
 Hi,

 Thanks for the reply and advice.

 Scott Marlowe kirjoitti:
 Version: 8.1.11    Vendor: CentOS
 

 So, you built it its own machine, but you didn't upgrade to at least 8.2?

   
 Now it is: 8.4devel_15092008

 The machine was installed by the production team from the standard CentOS 
 template. I tried to adhere to the standard and installed the standard 
 CentOS binary for Postgresql. I am not  part of production team so I try to 
 be extra careful with the rule book.

 Please post the output of explain analyze as an attachment.  explain
 is only half the answer.

   
 I did what Kenneth Marshall suggested and edited DBIx::Searchbuilder's 
 Handle/Pg.pm. I will post the explain analyze for the new query it now 
 generates if it becomes necessary.
 Possibly.  explain analyze will help you identify where stats are
 wrong.  sometimes just cranking the stats target on a few columns and
 re-analyzing gets you a noticeable performance boost.  It's cheap and
 easy.

 When the estimated and actual number of rows are fairly close, then
 look for the slowest thing and see if an index can help.

 What have to already done to tune the install?  shared_buffers,
 work_mem, random_page_cost, effective_cache_size.  Is your db bloating
 during the day?

   
 When I upgraded to 8.4 I also checked newer Postgresql manual for the 
 memory consumption and found comment by Steven Citron-Pousty and increased 
 accordingly:
 - shared_buffers to 320MB
 - wal_buffers to 8MB
 - effective_cache_size to 2048MB
 - maintenance_work_mem to 384MB

 Sorry, I do not understand what you mean by bloating. The db size is:
 rt=# select pg_size_pretty(pg_database_size('rt'));
 pg_size_pretty
 
 350 MB
 (1 row)

 Are you running on a single SATA hard drive?How big's the database
 directory?  I'm guessing from your top output that the db is about 500
 meg or so.  it should all fit in memory.

   
 -bash-3.2$ du --si -s data
 524Mdata

 I don't know what kind of drives there actually are. The machine is vmware 
 virtual with two virtual CPU's clocking 2,33GHz, 4 GB ram, 1 GB swap. The 
 disk is probably given from either MSA or from EVA. The disk shows up as 
 one virtual drive and everything is on it. Filesystem is ext3 on lvm. 
 Database data is on /var which is it's own volume.

 I have also added 5 more mason processes to the web frontend machine.

 For me the results look promising. Opening search builder went from 42 
 seconds to 4 seconds and opening one particular long chain takes now only 
 27 seconds. But again I am not from the support team either so I do not get 
 to define what is fast enough. The verdict is now in for the jury to 
 decide.

 Thank you.






Re: [ADMIN] Heavy postgres process

2008-09-17 Thread Kenneth Marshall
Gee,

Going to Oracle does seem a bit like throwing the baby out with
the bath water. For pretty much any use, we found that Oracle requires
many more hardware and management resources than PostgreSQL needs for
the same performance. Make certain that you load test your Oracle upgrade
to ensure that you can meet your service requirements.

On the performance problem, I think that the 83MB is the shared_buffers
for postgres and is shared between all backends. According to the FreeBSD
site, sbwait happens when a thread is trying to send or receive data on
a blocking socket. I would try timing a couple of sample queries that
your app generates, but it may be your Apache processes that are using
the lion's share of your memory.

Cheers,
Ken

On Wed, Sep 17, 2008 at 11:18:24PM +1000, Vivek_Sharan wrote:
 Yes that's true and that's planned. We will migrate to Oracle. But as of now 
 need some pointers on solving the problem in hand.
 
 Regards,
 Vivek
 
 
 
 -Original Message-
 From: Guido Barosio [mailto:[EMAIL PROTECTED]
 Sent: Wednesday, September 17, 2008 6:39 PM
 To: Vivek_Sharan
 Cc: Scott Marlowe; pgsql-admin@postgresql.org
 Subject: Re: [ADMIN] Heavy postgres process
 
 Well, the answer is short, Vivek:
 
 Upgrade that postgresql ASAP, it's way too old.
 
 gb.-
 
 On Wed, Sep 17, 2008 at 9:29 AM, Vivek_Sharan [EMAIL PROTECTED] wrote:
  I'm using postgres 7.4.5
 
  Regards,
  Vivek
 
 
 
  -Original Message-
  From: Guido Barosio [mailto:[EMAIL PROTECTED]
  Sent: Tuesday, September 16, 2008 8:08 PM
  To: Vivek_Sharan
  Cc: Scott Marlowe; pgsql-admin@postgresql.org
  Subject: Re: [ADMIN] Heavy postgres process
 
  On Tue, Sep 16, 2008 at 1:41 AM, Vivek_Sharan [EMAIL PROTECTED] wrote:
  Thanks for the information so far
  My Application runs on FreeBSd box and main technological component are 
  Apache and mod Perl, database is postgres. I have already scanned 
  pg_stat_activity and pg_listener tables but could not get any clue. 
  Pg_stat_activity shows list of all idle processes but command 
  (current_query) column is empty. So I cannot make out what these processes 
  are doing.
  TOP on this server doesn't have any option available to further break down 
  processes. And hitting 'M' did not change anything because this is not 
  available with top on this server. Following is the output of top if 
  filtered for only postgres user
 
  *
  last pid: 92308;  load averages:  0.00,  0.03,  0.05
  78 processes:  2 running, 76 sleeping
  CPU states:  1.6% user,  0.0% nice,  3.4% system,  0.0% interrupt, 94.9% 
  idle
  Mem: 413M Active, 2122M Inact, 534M Wired, 140M Cache, 199M Buf, 533M Free
  Swap: 4096M Total, 3880K Used, 4092M Free
 
   PID USERNAME  PRI NICE  SIZERES STATE  C   TIME   WCPUCPU COMMAND
  90976 postgres2   0 83568K 76016K sbwait 2   0:32  2.83%  2.83% 
  postgres
  90963 postgres2   0 83396K 75876K sbwait 2   0:25  1.37%  1.37% 
  postgres
  90919 postgres2   0 83808K 76244K sbwait 1   0:32  0.39%  0.39% 
  postgres
  87341 postgres2   0  6388K   756K select 3   2:35  0.00%  0.00% 
  postgres
  87340 postgres2   0  7200K  1224K select 0   1:41  0.00%  0.00% 
  postgres
  90961 postgres2   0 83580K 76008K sbwait 0   0:30  0.00%  0.00% 
  postgres
  90920 postgres2   0 83636K 76068K sbwait 0   0:29  0.00%  0.00% 
  postgres
  90934 postgres2   0 83664K 76012K sbwait 0   0:27  0.00%  0.00% 
  postgres
  90924 postgres2   0 83408K 75872K sbwait 0   0:25  0.00%  0.00% 
  postgres
  90915 postgres2   0 79292K 72664K sbwait 0   0:23  0.00%  0.00% 
  postgres
  90955 postgres2   0 79644K 73040K sbwait 0   0:22  0.00%  0.00% 
  postgres
  90979 postgres2   0 78904K 72260K sbwait 0   0:17  0.00%  0.00% 
  postgres
  87339 postgres2   0 74756K   672K select 1   0:12  0.00%  0.00% 
  postgres
  90921 postgres2   0 75504K 59848K sbwait 3   0:01  0.00%  0.00% 
  postgres
  90927 postgres2   0 75540K 59296K sbwait 3   0:01  0.00%  0.00% 
  postgres
  90962 postgres2   0 75524K 56960K sbwait 0   0:01  0.00%  0.00% 
  postgres
  90923 postgres2   0 75540K 57584K sbwait 1   0:01  0.00%  0.00% 
  postgres
  90914 postgres2   0 75552K 57776K sbwait 1   0:01  0.00%  0.00% 
  postgres
  90917 postgres2   0 75524K 57256K sbwait 3   0:01  0.00%  0.00% 
  postgres
  90922 postgres2   0 75504K 57352K sbwait 1   0:01  0.00%  0.00% 
  postgres
  90918 postgres2   0 75508K 57748K sbwait 3   0:01  0.00%  0.00% 
  postgres
  90933 postgres2   0 75540K 53728K sbwait 2   0:01  0.00%  0.00% 
  postgres
  90926 postgres2   0 75484K 54928K sbwait 3   0:01  0.00%  0.00% 
  postgres
  90931 postgres2   0 75512K 20880K sbwait 3   0:00  0.00%  0.00% 
  postgres
  90977 postgres2   0 75512K 20584K sbwait 0   0:00  0.00%  0.00% 
  postgres
  91005 postgres2   0 75512K 19956K sbwait 0   0:00  0.00%  0.00% 
  postgres
  90966 postgres

Re: [ADMIN] Help request: how to tune performance?

2008-09-16 Thread Kenneth Marshall
On Tue, Sep 16, 2008 at 11:10:01AM -0600, Scott Marlowe wrote:
 On Tue, Sep 16, 2008 at 2:31 AM, Mauri Sahlberg
 [EMAIL PROTECTED] wrote:
  Hi,
 
  We just upgraded Best Practical's RT from 3.6 to 3.81 and gave the
  database completely own machine. And the users still complain that it is
  dog slow.
 
 Moved up from below:
 
  Version: 8.1.11    Vendor: CentOS
 
 So, you built it its own machine, but you didn't upgrade to at least 8.2?
 
 Last place I worked we ran rt 3.6.1 and got a noticeable performance
 boost from switching to 8.2 but the only thing that was ever really
 slow was viewing the rather large approval queue.
 
  :-( I installed pg_top and it seems that at the beginning of
  the ticket display RT-issues a query that eats everything the database
  has. Query is as follows:
 
  SELECT DISTINCT main.* FROM Users main CROSS JOIN ACL ACL_2 JOIN
  Principals Principals_1  ON ( Principals_1.id = main.id ) JOIN
  CachedGroupMembers CachedGroupMembers_3  ON
  ( CachedGroupMembers_3.MemberId = Principals_1.id )  WHERE
  (Principals_1.Disabled = '0') AND (ACL_2.PrincipalId =
  CachedGroupMembers_3.GroupId) AND (Principals_1.id != '1') AND
  (ACL_2.PrincipalType = 'Group') AND (Principals_1.PrincipalType =
  'User') AND (ACL_2.RightName = 'OwnTicket') AND ((ACL_2.ObjectType =
  'RT::Queue' AND ACL_2.ObjectId   = 18) OR (ACL_2.ObjectType =
  'RT::System'))  ORDER BY main.Name ASC
 

What is your version of DBIx-SearchBuilder? In particular, the
DISTINCT definition was changed from:

$$statementref = "SELECT DISTINCT main.* FROM $$statementref";

to:

$$statementref = "SELECT main.* FROM ( SELECT DISTINCT main.id FROM
$$statementref ) distinctquery, $table main WHERE (main.id = distinctquery.id)";

You can change the line in your version of SearchBuilder and restart
RT with a cache flush to get approximately the same benefit.

Ken



Re: [ADMIN] Time Type

2008-07-28 Thread Kenneth Marshall
What exactly are you hoping for? I do not think that adding two
times makes sense. Perhaps you want one of them to be an interval?
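
The usual fix is to make the right-hand side an interval:

```sql
SELECT '00:01:00'::time + '00:10:00'::interval;      -- 00:11:00
-- or, if both values really are durations:
SELECT '00:01:00'::interval + '00:10:00'::interval;  -- 00:11:00
```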

Ken

On Mon, Jul 28, 2008 at 03:16:12PM -0300, Rafael Domiciano wrote:
 Hi there,
 
 I'm trying to do some as following and I couldn't:
 select '00:01:00'::time + '00:10:00'::time
 ERROR:  operator is not unique: time without time zone + time without time
 zone at character 127
 HINT:  Could not choose a best candidate operator. You might need to add
 explicit type casts.
 
 Postgres version: 8.3.3
 
 Anyone knows anything?
 
 Regards,
 
 Rafael Domiciano



Re: [ADMIN] What is the difference in storage between a blank string and null?

2008-04-11 Thread Kenneth Marshall
On Fri, Apr 11, 2008 at 04:02:36PM -0400, Chris Hoover wrote:
 I'm doing some testing on how to decrease our database size as I work on a
 partitioning scheme.
 
 I have found that if I have the database store all empty strings as nulls, I
 get a significant savings over saving them as blank strings (i.e. '').
 Below is an example of savings I am seeing for the same table:
 
 In my test case, storing empty strings  give me a table size of 20,635,648
 Storing empty strings as nulls gives me a table size of: 5,742,592.
 
 As you can see, storing empty strings as nulls is saving me approximately
 72% on this table.  So, I am wanting to understand what Postgres is doing
 differently with the nulls.  Would someone kindly enlighten me on this.
 
 (P.S. I am using a nullif(trim(column),'') in my partition and view rules to
 store the nulls, and coalesce(column,'') to give my application the data
 back without nulls.)
 
 Thanks,
 
 Chris
 
 PG 8.1
 

PostgreSQL stores NULLs differently: a NULL takes no data space at
all, only a bit in the row's null bitmap, while an empty string is
still a stored value with a length header. That accounts for your
space difference. If your application can work with NULLs instead of
'' (they are not the same thing), go for it.
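
A quick demonstration that the two are distinct, and of the nullif/coalesce
round trip:

```sql
SELECT '' IS NULL             AS empty_is_null,  -- f: '' is a real value
       nullif('', '') IS NULL AS nullif_maps,    -- t: '' mapped to NULL on the way in
       coalesce(NULL, '')     AS read_back;      -- '': NULL mapped back to '' on the way out
```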

Cheers,
Ken



Re: [ADMIN] PANIC: right sibling

2007-12-04 Thread Kenneth Marshall
You have a corrupt index; run:

REINDEX INDEX tickets1;

Ken

On Tue, Dec 04, 2007 at 11:03:15AM -0600, Scott Whitney wrote:
 I don't really understand the postings I've found on this issue. This has
 been going on for quite some time, but now that I'm regularly vacuuming this
 db, I was looking for any info on how to fix this problem...I don't even
 know enough about it to know what to include in this message.
 
 
 This is the last output of:
 
 vacuumdb -f -z -v rt3
 
 
 
 vacuumdb: vacuuming of database rt3 failed: PANIC:  right sibling is not
 next child in tickets1
 server closed the connection unexpectedly
 This probably means the server terminated abnormally
 before or while processing the request.
 
 
 ---(end of broadcast)---
 TIP 7: You can help support the PostgreSQL project by donating at
 
 http://www.postgresql.org/about/donate
 



Re: [ADMIN] RPATH issue with libpq on Solaris 8 (gcc)

2007-11-05 Thread Kenneth Marshall
On Mon, Nov 05, 2007 at 08:07:04AM +0100, Peter Eisentraut wrote:
 Am Samstag, 3. November 2007 schrieb Tom Lane:
   That also seems to have the positive effect of getting libpq.so to find
   the shared objects that it depends on.  So is the fact that I need to
   edit src/Makefile.global after I run configure mean that I found a bug?
   ;-)
 
  Seems more like a feature request: there should be a way to override
  rpathdir from a configure switch.  We already have an enable-rpath
  switch, but it's just a boolean.  Maybe allow
  configure --enable-rpath=PATH
  ?
 
 Well, the rpath option affects the rpath to our own libraries.  To find the 
 system libraries, the environment variable LD_RUN_PATH can be used.  
 Otherwise you'd have to convince every software product in existence to add 
 the appropriate configuration options.
 
 -- 
 Peter Eisentraut
 http://developer.postgresql.org/~petere/
 

We use LD_RUN_PATH in our Solaris builds for that reason.

Ken



Re: [ADMIN] How to update from 8.1 to 8.2 ?

2007-10-15 Thread Kenneth Marshall
Use Slony to replicate from the old version to the new one.
Once they are synced, promote the new version to be the master
and re-point your apps at the new DB.

Ken

On Mon, Oct 15, 2007 at 04:52:43PM +0400, Alexander Kuprijanov wrote:
 Hello
 
 Does exist another way to make upgrade (without dump-restore)?
 
 I have this question, because I have very big DB (or very poor server): dump 
 ~40min, restore 2h
 
 I can't stop my DB so long...
 
 
 
 Thanks
 
 
  On Thursday 20 September 2007 19:58:24, Milen A. Radev wrote:
  Josef Springer :
   Where is the documentation of the steps dto do for updating e.g. from 8.1
   to 8.2 ? I have never done this and could not find any docs for.
 
  [...]
 
  http://www.postgresql.org/docs/8.2/static/install-upgrading.html
 
 
 
 -- 
 Se okulo ne atentas, dorso eksentas
 
 



Re: [ADMIN] tar, but not gnu tar

2007-08-22 Thread Kenneth Marshall
Tena,

We have been very happy with star. It is a very nice replacement
for pax, cpio, GNU tar, and friends. You may want to give it a try.

Ken

On Tue, Aug 21, 2007 at 06:53:53PM -0700, Tena Sakai wrote:
 Hi Kevin,
 
 Yes, I have, but I am much more familiar with tar.
 I think I will go with the latest gnu tar (v 1.18)
 which is suggested by Bruce.  I will play with it
 tomorrow and see how it goes.
 
 Thanks.
 
 Tena
 
 [EMAIL PROTECTED]
 
 
 -Original Message-
 From: Kevin Grittner [mailto:[EMAIL PROTECTED]
 Sent: Tue 8/21/2007 6:02 PM
 To: Tena Sakai; pgsql-admin@postgresql.org
 Subject: Re: [ADMIN] tar, but not gnu tar
  
  On Tue, Aug 21, 2007 at  7:28 PM, in message
 [EMAIL PROTECTED], Tena Sakai
 [EMAIL PROTECTED] wrote: 
  
  On my linux machine, gnu tar is the tar.  Does
  anybody have a suggestion as to where I can go
  to get a tar that is not gnu?
  
 Have you considered using cpio instead?
  
 http://www.gnu.org/software/cpio/
  
 -Kevin
  
 
 
 



Re: [ADMIN] replicating postgresql database to ms-sql database

2007-07-06 Thread Kenneth Marshall
Mary,

I believe that you can use FreeTDS with dblink to set up your
own replication process. I agree with you that having both databases
be PostgreSQL would be preferable.

Ken

On Fri, Jul 06, 2007 at 10:48:48AM -0700, Mary Anderson wrote:
 Hi list,
I have been asked if it is possible to asynchronously replicate a 
 postgresql database to ms-sql.  My answer is yes, provided postgresql 
 ORM features aren't used in the design of the database.  One does an 
 ascii dump of the tables and then bulk loads them into MSSQL.  The 
 database in question will be about 10G.  Is there a sane way to do this?
 I am hoping the slony can be instructed to produce an ascii transaction 
 log which can then be massaged and applied to the MS-SQL database. 
 (This is a database of population statistics which is relatively static.
 It will be read only on the ms-sql node)
 
 I think it is much better to have both databases running postgres, 
 by the way.
 
 mary anderson
 
 



Re: [ADMIN] postgresql recovery (PANIC: btree_split_redo: lost left sibling)

2007-05-31 Thread Kenneth Marshall
You may want to give the latest 8.0 point release, 8.0.13, a try.

Ken

On Thu, May 31, 2007 at 06:14:05PM +0200, Sipos Ferenc wrote:
 Hello,
 
 I have a problem with postgresql recovery. I want to make a test
 recovery from our production server to a test machine. On the production
 server runs PostgreSQL 8.0.3. On the test machine the PostgreSQL version
 is 8.0.4 (installed from binary rpm).



Re: [ADMIN] select count(*) from a large table is very slow

2007-05-18 Thread Kenneth Marshall
Make a summary table and select from there instead.
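
A trigger-maintained sketch of that idea (table and function names are
hypothetical; requires the plpgsql language to be installed):

```sql
CREATE TABLE big_table_count (n bigint NOT NULL);
INSERT INTO big_table_count SELECT count(*) FROM big_table;

CREATE FUNCTION big_table_count_trig() RETURNS trigger AS $$
BEGIN
  IF TG_OP = 'INSERT' THEN
    UPDATE big_table_count SET n = n + 1;
  ELSE  -- DELETE
    UPDATE big_table_count SET n = n - 1;
  END IF;
  RETURN NULL;  -- return value is ignored for AFTER triggers
END $$ LANGUAGE plpgsql;

CREATE TRIGGER big_table_count_trig AFTER INSERT OR DELETE ON big_table
  FOR EACH ROW EXECUTE PROCEDURE big_table_count_trig();

-- Now a single-row read replaces the 30-million-row scan:
SELECT n FROM big_table_count;
```

Note the single summary row serializes concurrent writers; on a heavily
written table, an approximate count from pg_class.reltuples may be the
better trade-off.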

Ken

On Fri, May 18, 2007 at 06:15:25AM -0700, Jessica Richard wrote:
 I have a ~30 million row table, select count(*) from this table is very very 
 very slow...
 
 How dow I get it run faster?
 
 Thanks,
 Jessica
 
  
 -
 We won't tell. Get more on shows you hate to love
 (and love to hate): Yahoo! TV's Guilty Pleasures list.



Re: [ADMIN] Postgres Backup Questions

2006-06-03 Thread Kenneth Marshall
John,

We accomplished this using an inline filter to capture the
checksum and size of the entire dump as it went to tape. Then we
re-read the tape to verify that it produced the same checksum and
size. That is probably about as good as you are going to get.

Ken

On Thu, Jun 01, 2006 at 01:57:07PM -0400, John R. Allgood wrote:
 Hey Again
 
Thanks for the input. but what I am wanting to check for is CRC 
 errors and block errors. The backups are written to tape and I would 
 like a way to verify the data on the tape as compared to the data on the 
 server. I may be way off here but I just want to be sure I am getting 
 reliable backups.
 
 Thanks
 
 Scott Marlowe wrote:
 On Thu, 2006-06-01 at 10:15, John R. Allgood wrote:
   
 Hello
 
 I have a question regarding backups on Postgresql 7.4.7-2. We are 
 currently using a Progress database and will be moving over to Postgres 
 this month. In Progress there is a way to verify your backups does 
 Postgres have anything similiar or is there other ways to accompplish 
 this. I am currently writing backup and other utility scripts and would 
 like to incorporate this into my program.
 
 
 I wrapped my backup in a bash script, and that seems to catch errors. 
 something like:
 
 if !(
    /data01/pg/bin/pg_dump dbname > dbname.sql
 ); 
 then
 echo "backup failed" | mail -s "backup failed" [EMAIL PROTECTED];
 fi
 
 To test this, I corrupted a table in a test database to force a hard
 failure, and it sent an email.  Don't know if it's a comprehensive test,
 but it seems to work for us.
 
   
 
 -- 
 All I can think of is a platter of organic PRUNE CRISPS being trampled
 by an army of swarthy, Italian LOUNGE SINGERS ...
 
 
 
