Re: [ADMIN] trouble restoring data from postgres 8.3.3 to freshly installed 8.3.6

2009-02-22 Thread Scott Marlowe
On Sun, Feb 22, 2009 at 2:32 AM, Tena Sakai tsa...@gallo.ucsf.edu wrote:
 Hi,

 I am able to revert back to the 8.3.3 system.  Maybe
 I can use 8.3.3 psql to read the dumpall output.
 How can I direct the output to the data directory of
 8.3.6 postgres?

You don't have to dump and restore for a minor point update.  Just
stop your 8.3.3 and start 8.3.6 in the same data directory.
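For instance, a minimal sketch (the data directory path and install
locations here are assumptions):

    # stop the old 8.3.3 postmaster
    pg_ctl -D /var/lib/pgsql/data stop -m fast
    # start the 8.3.6 binaries against the very same data directory
    /usr/local/pgsql-8.3.6/bin/pg_ctl -D /var/lib/pgsql/data start

Minor releases within a branch (8.3.x) share the same on-disk format,
so no dump/restore is needed.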



Re: [ADMIN] Tuning postgres for fast restore?

2009-02-22 Thread Scott Marlowe
On Sat, Feb 21, 2009 at 12:14 AM, Guillaume Lelarge
guilla...@lelarge.info wrote:
 Tino Schwarze a écrit :
 [...]
 I'm going to pg_restore a database dump of about 220 GiB (uncompressed,
 but most data is BLOBs). The machine has 8 GiB of memory and 8 cores.
 Is there any advice to speed up restoring, postgresql.conf-wise?

 I already have a script which does the data loading and index creation
 in parallel. I'm looking for advice regarding shared_mem, work_mem and
 maintenance_mem - shall I raise them?


 You should definitely raise shared_buffers and maintenance_work_mem.

Also, you can disable fsync during a restore if the machine's not
handling any other databases.
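As a hedged example, the restore-only settings might look like this in
postgresql.conf (values are illustrative for an 8 GiB box, not
recommendations):

    fsync = off                    # restore only -- a crash means restarting the restore
    shared_buffers = 2GB
    maintenance_work_mem = 1GB     # speeds up the parallel index builds

Remember to turn fsync back on before going into production.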



Re: [ADMIN] very, very slow performance

2009-02-21 Thread Scott Marlowe
On Fri, Feb 20, 2009 at 10:45 PM, Uwe C. Schroeder u...@oss4u.com wrote:

 On Friday 20 February 2009, Tena Sakai wrote:
 Hi Scott,

  What does explain and (it'll take a while to get
  it) explain analyze select ... have to say?

 ------------------------------------------------------------------------------
  Hash Join  (cost=165264.65..55486119.31 rows=601095277 width=32)
    Hash Cond: (genotype.allele1id = a1.alleleid)
    ->  Hash Join  (cost=82632.33..34731274.54 rows=601095277 width=34)
          Hash Cond: (genotype.allele2id = a2.alleleid)
          ->  Seq Scan on genotype  (cost=0.00..13976429.77 rows=601095277 width=36)
          ->  Hash  (cost=42474.59..42474.59 rows=2447659 width=6)
                ->  Seq Scan on allele a2  (cost=0.00..42474.59 rows=2447659 width=6)
    ->  Hash  (cost=42474.59..42474.59 rows=2447659 width=6)
          ->  Seq Scan on allele a1  (cost=0.00..42474.59 rows=2447659 width=6)
 (9 rows)

I was wrong about this query; it is constrained by the where clause.
I much prefer join on syntax as it's more obvious what's joining to
what.  Pgsql is smart enough to reorder join clauses as long as it's
not constrained by, say, a left join, and even then there's some
wiggle room, I think.  Anyway...

 The above tells you that you don't have indices in place. Postgres chooses a
 seq scan - which as the name implies scans all the rows in sequential order.

Well, he's grabbing everything, so I'm betting an index won't buy you
anything; unless everything fits in memory and you set random_page_cost
low enough and shared_buffers and effective_cache_size high enough, an
index will lose.

However, if you always access the tables in a given order, you can
cluster the tables and get really fast results.  I'd try creating an
index on the join column of each table, clustering each table on that
index, and adding order bys to put the result sets into matching
clustered-index order for each joined table.
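A sketch of that idea, using the table names from this thread (the
index name is made up):

    CREATE INDEX genotype_allele1id_idx ON genotype (allele1id);
    -- 8.3 syntax; older releases use CLUSTER indexname ON tablename
    CLUSTER genotype USING genotype_allele1id_idx;
    ANALYZE genotype;

After that, an ORDER BY allele1id on the big result set reads the
table in roughly physical order.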



Re: [ADMIN] 8.3.5 broken after power fail SOLVED

2009-02-21 Thread Scott Marlowe
On Sat, Feb 21, 2009 at 1:43 AM, Michael Monnerie
michael.monne...@is.it-management.at wrote:
 I managed to recover the data that was still readable. About 650
 messageblock entries got lost. What makes me nervous a bit is that
 postgres kept running despite (partially) being destroyed. It should
 really have shutdown itself after the first problem was found. That
 database is for mails, and I would understand to have lost some from the
 time before the power loss, but I even lost entries from *after* the
 crash. That means the error happened after the system was up again and
 happily did its work, but lost entries to a table. Could there be a
 better way to check the db at crash recovery startup time?

 If someone is interested, I have a full postgres log with every single
 command done to the database.

We preach this again and again.  PostgreSQL can survive a power
outage type failure ONLY if the hardware / OS / filesystem don't lie
about fsync.  If they do, all bets are off, and this kind of failure
means you should really fail over to another machine or restore a
backup.

That's why, to have a reliable server, you have to do possibly
destructive tests to see if your server stands at least some chance of
surviving this kind of failure, and use log shipping for recovery
and / or replication of another form (slony etc...).

The recommendations for recovery of data are just that, recovery
oriented.  They can't fix a broken database at that point.  You need
to take it offline after this kind of failure if you can't trust your
hardware.

Usually when it finds something wrong it just won't start up.



Re: [ADMIN] 8.3.5 broken after power fail

2009-02-21 Thread Scott Marlowe
On Sat, Feb 21, 2009 at 1:23 AM, Michael Monnerie
michael.monne...@is.it-management.at wrote:

 Also a question: Because I must read all data, the psql client runs out
 of memory, trying to cache all the 10GB from that table. I circumvented
 this with selecting only parts of the table all the time. Is there a
 smart way to do such a select without caching the results in memory? Is
 that what temporary tables and select into are made for? I just want
 to know the recommended way for doing huge queries.

You can dump individual tables with pg_dump -t table1 -t table2.  That
should work without running out of memory.  And yeah, temp tables and
select into are a good way to get your data ready to be pg_dumped.
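A minimal sketch (the database name is made up; table1/table2 are from
the sentence above):

    pg_dump -t table1 -t table2 mydb > two_tables.sql

pg_dump uses COPY under the hood, streaming rows out instead of
buffering the whole result client-side the way a plain psql SELECT does.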



Re: [ADMIN] very, very slow performance

2009-02-21 Thread Scott Marlowe
On Sat, Feb 21, 2009 at 3:11 AM, Tena Sakai tsa...@gallo.ucsf.edu wrote:
 Hi Scott,

 Thanks for clustering suggestion.  I have never used
 cluster command and I need to read up before I can
 use it.

 I have adjusted postgres parameters per your recommendation.
 Work_mem is now 8GB, checkpoint_segments is
 raised to 100.  Shared_buffers is still at 1GB because
 this is bound by the kernel's SHMMAX, which I cannot alter
 for maybe a few days.  I am thinking of setting SHMMAX
 to 8GB.

Yeah, I'd reverse those two and set work_mem to something in the 512MB
range max.  Keep in mind work_mem is per query / per sort: 4 queries
with 3 sorts each = max allocation of work_mem * 3 * 4...

 I may or may not be able to move up to 8.3.6

 over this weekend.  If not, sometime during the week.

definitely a good idea.



Re: [ADMIN] Crash with data corruption under Windows

2009-02-20 Thread Scott Marlowe
On Fri, Feb 20, 2009 at 7:32 AM, Nicola Mauri nicola.ma...@saga.it wrote:

 We run into the following issue with Windows 2003 and Postgres 8.2.6 while
 database was running:

  FATAL:  pg_tblspc/16405/37638 is not a valid data directory
  DETAIL:  File pg_tblspc/16405/37638/PG_VERSION is missing.
  ERROR:  could not open relation 16405/37638/2661: No such file or directory
  ERROR:  could not open relation 16405/37638/2659: No such file or directory
  ERROR:  could not write block 4 of relation 16405/37638/37656: Permission
 denied

Usually when I see the permission denied thing there's anti-virus
software hard locking pgsql files in the middle of the day.



Re: [ADMIN] vacuum full...

2009-02-19 Thread Scott Marlowe
On Thu, Feb 19, 2009 at 9:35 AM, Jessica Richard rjes...@yahoo.com wrote:
 I am running vacuum full via a shell script for a list of large databases
 now... and I may run out of my scheduled system down time

 If I don't finish all databases and kill the script in the middle... am I
 going to cause any table corruptions, since vacuum full is rebuilding the
 tables and indexes?

No.  But may I ask why you are running vacuum full?  Is it a regular
thing?  Do all your tables in all your dbs actually need it?



Re: [ADMIN] vacuum full...

2009-02-19 Thread Scott Marlowe
On Thu, Feb 19, 2009 at 11:01 AM, Rafael Domiciano
rafael.domici...@gmail.com wrote:
 I used to run vacuum full on one of my databases, but for the past month
 I have been running only vacuum analyze, not vacuum full.  The number
 of necessary pages is increasing every day and is now at 311264... is
 there any problem with it increasing?
 When I ran REINDEX a few days ago, the number did not decrease.
 I know that vacuum full reorders the relation, but is that the only thing
 it does?

Same question I had for the other poster, why are you not running
autovacuum?  Has it proven itself not up to the task?



Re: [ADMIN] very, very slow performance

2009-02-19 Thread Scott Marlowe
On Thu, Feb 19, 2009 at 10:02 PM, Tena Sakai tsa...@gallo.ucsf.edu wrote:
 Hi Everybody,

 I am running postgres v8.3.3 on redhat linux (Dell hardware)
 with 4 cpu's.  This machine is terribly bogged down and I
 would like a bit of help as to what can be done.

 For last maybe 18+/- hours, there are 24 queries happening.
 What's odd is that 21 of them are identical queries.  This
 happens once in a while (maybe one per month, plus/minus)
 because of the data I need to process.  Basically, I fire
 up an application for each data (there are 21 of them) and
 the application gets data out of postgres, transforms
 matrices, and puts them into the form the subsequent processes
 can handle.  I know it sounds dumb, but that's how it is for
 a foreseeable future.

 Here's the query (please read them in fixed-size font, if
 you can):

  select subjectid, genotype.markerid, a1.value as allele1,
 a2.value as allele2, genotype.dateCreated,
 genotype.dateReplaced, genotype.ignore,
 genotype.inconsistent
from genotype, allele a1, allele a2
   where
 allele1id = a1.alleleid
 and
 allele2id = a2.alleleid;

 Genotype table mentioned above has about 600,000,000+ rows.  As
 I mentioned, there are 20 more of them running concurrently.
 3 other jobs look like:

So, you're purposely creating a cross product of 600M rows? How big is
the allele table?  What does explain and (it'll take a while to get
it) explain analyze select ... have to say?

If you're doing the cross product on purpose then it's gonna chew up a
lot of memory.  I'd suggest raising work_mem to a gig or so and
running only as many of these queries at a time as the machine can
handle.  Running 21 of them in parallel is gonna bog down and be
swapping / spilling to disk all over the place.

If you don't mean to have a cross product then add in the where clause
part / join on syntax to remove the cross product.
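
The same query in join on form (same columns, same semantics as the
query quoted above):

    select subjectid, genotype.markerid, a1.value as allele1,
           a2.value as allele2, genotype.dateCreated,
           genotype.dateReplaced, genotype.ignore, genotype.inconsistent
      from genotype
      join allele a1 on genotype.allele1id = a1.alleleid
      join allele a2 on genotype.allele2id = a2.alleleid;
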
 The total memory the machine has is 32 mb and nearly 100%
 of it is consumed.  Swap is twice as large as physical memory,
 but very little is used.  The load average of the machine when
 I am in this fix is somewhere between 25 and 27.

I'll assume that's 32G, which is a pretty common size for db servers
nowadays.  How much is consumed means little, given the tendency of
the kernel to cache.  I'm assuming you're on linux / unix since you
didn't say otherwise, and big windows servers are a rarity right now
for pgsql.  It's good swap isn't used, shows there's no pressure on
the memory subsystem.  I assume you mean this is WHILE the queries are
running.

If you HAVE to run that many queries at once on a machine like this,
then you need a big honking RAID array, or a small one at least.
Generally you want as many mirror sets as you'll have parallel queries
running, preferably double with this kind of sequential scan heavy
load.  But you'll still be limited by memory when running these
queries at the same time.

 Each postgres process consumes so little cpu time.  The copy
 takes maybe 4% (+/-); the rest of them use somewhere between
 0.3% and 0.7%.  As to memory, the copy takes 3% to 4% and the
 rest takes something like 1.7%.

What does vmstat 10 300 say while the queries are running?  We're
looking for high wait percentage.  If you've got that you're limited
by the speed of your drives.

 In terms of postgres configuration:
   max_connections = 100
   shared_buffers = 1024MB
   temp_buffers = 128MB
   max_fsm_pages = 153600
   vacuum_cost_delay = 0
   checkpoint_segments = 3
   checkpoint_timeout = 5min
   checkpoint_warning = 30s

 I don't think I am doing anything wild...  Am I?

Actually, you're doing something mild.  I'd test 2, 4, 6, and 8 gig of
shared_buffers.  I'd increase work_mem at least for the user running
the biggest queries, those cross products up earlier in the post.
Also, since you seem to have some large updates, I'd increase the
checkpoint segments to something in the 20 to 100 range.
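
A hedged example of the per-user route (the role name is made up):

    ALTER USER matrix_loader SET work_mem = '512MB';

That scopes the bigger setting to the heavy queries without raising it
for every connection.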

 Oh, one more thing, I said that there are 24 queries/jobs
 happening, but there are a bunch of them that says IDLE
 or IDLE in transaction --according to pg_stat_activity
 view.

Oh, then maybe you don't have that many.  Idle connections are ok,
they use up little; unless you've got a hundred or so, don't worry.
Idle in transaction, OTOH, is bad.  It basically holds a lock on
reusing old rows in the db and can cause bloat.  Generally it's an app
/ app translation error that needs fixing.  Leaving a transaction open
for very long is a bad thing.

I think there were some planner fixes from 8.3.3 to 8.3.5 btw.  Those
might be important.  An update might solve your problems.



Re: [ADMIN] 8.3.5 broken after power fail

2009-02-18 Thread Scott Marlowe
Look into the zero_damaged_pages setting.

On Wed, Feb 18, 2009 at 2:35 PM, Michael Monnerie
michael.monne...@is.it-management.at wrote:
 I have the impression I'm asking the wrong things. Still I try.
 Could I do something about this?

 psql:/tmp/x:119: ERROR:  invalid page header in block 973698 of relation
 pg_toast_1281127

 It seems the toast table got truncated after the host+XEN crash and XFS
 recovery. Would adding some zeroes to that file help? If yes, how could I
 find out what the file is named?

 mfg zmi







-- 
When fascism comes to America, it will be the intolerant selling it as
diversity.



Re: [ADMIN] 8.3.5 broken after power fail

2009-02-17 Thread Scott Marlowe
2009/2/17 Achilleas Mantzios ach...@matrix.gatewaynet.com:
 On Tuesday 17 February 2009 10:54:52, Michael Monnerie wrote:

 * your 3.5-year-old daughter switching off the power supply of the
 server
 1st line of defense is to set your system to shut down normally when the power 
 button is pressed.
 2nd line of defense is to get yourself a decent UPS unit

 My daughter does this all the time on our family FreeBSD box. No probs.

 Also, at work, on more than 20 tanker vessels running 7.4.2, the captains do 
 that on a constant
 basis and PgSQL has always survived (more than the rest of the system 
 anyways..)

Those are all good to have.  But no UPS is a replacement for hard
drives / RAID controllers / file systems that don't lie about fsync.

Nothing makes your database shine like being the only one in the
hosting center that survives sudden catastrophic power failure.

 What can I do?

Tether your daughter to the other side of the room?  I'm not sure
which parts of those mount options are dangerous or not.  I use ext3
stock with noatime.  And a battery backed RAID.  Smaller, slower work
group / workstation controllers (i.e. 5 year old server controllers)
go for pretty cheap and give pretty good performance with 2 or 4 drives.

As for fixing it, I believe the answer involves creating a clog file
full of zeros, 16Meg or so, and pg_resetxlog.  Don't count on all your
data being there or all your FK-PK type relationships to be correct.
Immediately dump it, initdb and reload your data, fixing it as you go.
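
A very rough sketch of that salvage path (every value here is an
assumption -- the missing segment name, its size, and $PGDATA -- and
this is destructive, so only do it on a copy of the data directory):

    # create a zero-filled stand-in for the missing clog segment
    dd if=/dev/zero of="$PGDATA/pg_clog/0000" bs=256k count=1
    # force the transaction log to a sane state
    pg_resetxlog -f "$PGDATA"

Then dump immediately, initdb, and reload, as described above.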



Re: [ADMIN] Downtime

2009-02-16 Thread Scott Marlowe
On Mon, Feb 16, 2009 at 12:52 PM, Rafael Domiciano
rafael.domici...@gmail.com wrote:
 Hello for all,

 Our business cannot tolerate any more downtime (downtime is estimated at
 10 hours / year - that covers hardware upgrades and hardware crashes), and
 using Slony-I 1.2.15 I have wasted time putting the slave node into production.
 So I'm studying some replication software, like pgpool, pg_cluster,
 pg_replicator.

What exactly was not good enough about slony?  We use it for automated
failover, which is initiated by the application if it detects the
master going down.  Works pretty well.  Our downtime for unscheduled
maintenance is well under an hour a year with this setup.



Re: [ADMIN] Vacuum wait time problem

2009-02-13 Thread Scott Marlowe
On Fri, Feb 13, 2009 at 10:20 AM, Roger Ging rg...@musicreports.com wrote:
 Hi,

 I'm running vacuum full analyze verbose on a table with 20 million rows and
 11 indexes.  In top, I'm seeing [pdflush] and postgres: writer process each
 using different cpu cores, with wait time well above 90% on each of them.
  The vacuum has been running for several hours, and the last thing to show
 on screen, over an hour ago, was :

 DETAIL:  8577281 index row versions were removed.
 736 index pages have been deleted, 736 are currently reusable.
 CPU 7.57s/52.52u sec elapsed 381.70 sec.

 That's the last index

 The vacuum process itself is using less than 2% of a core.
 The pg version is 8.3.1 running on Suse.  Hardware is 2X dual core Opterons,
 16 GB RAM, 24 drives in RAID 50

 It would seem to me that the system is extremely IO bound, but I don't know
 how to find out what specifically is wrong here.  Any advice greatly
 appreciated.

A couple of questions.
Why Vacuum full as opposed to vacuum (regular)?
Why 8.3.1 which has known bugs, instead of 8.3.latest?
What do vmstat 10 and iostat -x 10 have to say about your drive
arrays while this vacuum is running?



Re: [ADMIN] Vacuum wait time problem

2009-02-13 Thread Scott Marlowe
On Fri, Feb 13, 2009 at 2:56 PM, Roger Ging rg...@musicreports.com wrote:
 Scott,

 I can only answer a couple of the questions at the moment.  I had to kill
 the vacuum full and do a regular vacuum, so I can't get the iostat and
 vmstat outputs right now.  This message is the reason I was trying to run
 vacuum full:

 INFO:  license: found 257 removable, 20265895 nonremovable row versions in
 1088061 pages
 DETAIL:  0 dead row versions cannot be removed yet.
 There were 18434951 unused item pointers.
 687274 pages contain useful free space.
 0 pages are entirely empty.
 CPU 38.15s/37.02u sec elapsed 621.19 sec.
 WARNING:  relation licensing.license contains more than max_fsm_pages
 pages with useful free space
 HINT:  Consider using VACUUM FULL on this relation or increasing the
 configuration parameter max_fsm_pages.

Yeah, that's pretty bad.  ~2 Million live rows and ~18 Million dead
ones is a pretty badly bloated table.

Vacuum full is one way to reclaim that lost space.  You can also dump
and restore that one table, inside a drop / create of its constraints
in a transaction during maintenance if you can.  A vacuum full is quite
intrusive, so avoid it during normal working hours.  Dumping and
reloading the whole db may be faster than either a vacuum full or a
cluster.  A common trick is to do something like:

begin;
select * into ordermydata from bigbloatedtable order by some_field;
delete * from bigbloatedtable;
insert into bigbloatedtable select * from ordermydata;
commit;

This will both put your table in some order which might help, and
remove the bloat.

 A clean restore of the database to another server creates a size on disk of
 about 244GB.  This server was at over 400GB yesterday, and now, after
 aggressive vacuuming by hand, is down to 350GB.  It had gotten so bad that
 the backup was not finished when I got in yesterday, almost 8 hours after it
 started.

Sounds like either autovacuum isn't running right, or it was turned
off and no decent vacuum schedule was set up by the dba.  Or the FSM
is too small, but judging by how bloated this table is, I'd guess
vacuuming got turned off.

 The machine has been under heavy load 24/7 for a couple of months, so I have
 not been able to upgrade versions.  I am taking it offline this weekend and
 will install the latest.  I'll try to re-create the scenario I had going on
 yesterday over the weekend and get some io statistics.

Definitely look into the check_postgres.pl script for nagios or
something similar to keep track of bloated tables.  Every Friday I log
into my production servers and fire off a few of the queries from that
script to check for bloat and make sure everything's running fine.
Another useful trick is to email yourself a copy of the last 20 or 30
lines of vacuum verbose for the whole db every so often (once a week)
and pay attention to your fsm usage.
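
A hedged crontab sketch of that last trick (the address and schedule
are made up):

    # Friday 03:00: vacuum everything verbosely, mail the FSM summary lines
    0 3 * * 5  vacuumdb --all --analyze --verbose 2>&1 | tail -n 30 | mail -s "weekly vacuum summary" dba@example.com

The final lines of a database-wide VACUUM VERBOSE report total free
space map usage against max_fsm_pages, which is exactly the number to
watch.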



Re: [ADMIN] Vacuum wait time problem

2009-02-13 Thread Scott Marlowe
On Fri, Feb 13, 2009 at 2:56 PM, Roger Ging rg...@musicreports.com wrote:

Oh yeah, also, any chance of testing your RAID array in RAID-10 some
day?  RAID5 anything tends to be pretty slow at writes, especially
random ones, and RAID-10 may give you a lot more bandwidth where you
need it, on the write side of the equation.



Re: [ADMIN] Vacuum wait time problem

2009-02-13 Thread Scott Marlowe
On Fri, Feb 13, 2009 at 3:34 PM, Tino Schwarze postgre...@tisc.de wrote:
 On Fri, Feb 13, 2009 at 03:13:28PM -0700, Scott Marlowe wrote:

 [...]

 Yeah, that's pretty bad.  ~2 Million live rows and ~18 Million dead
 ones is a pretty badly bloated table.

 Vacuum full is one way to reclaim that lost space.  You can also dump
 and restore that one table, inside a drop / create of its constraints in a
 transaction during maintenance if you can.  A vacuum full is quite
 intrusive, so avoid it during normal working hours.  Dumping and
 reloading the whole db may be faster than either a vacuum full or a
 cluster.  A common trick is to do something like:

 begin;
 select * into ordermydata from bigbloatedtable order by some_field;
 delete * from bigbloatedtable;
 insert into bigbloatedtable select * from ordermydata;
 commit;

 This will both put your table in some order which might help, and
 remove the bloat.

 Really? Wouldn't that add even more bloat? How does that work? (I'd
 expect a drop table/create table instead of the delete...)
 Note: I suppose that you know a lot more about PG than I do, so I'm just
 curious.

Whoops, meant
truncate bigbloatedtable;
sheesh, long week.
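
Putting the correction in, the trick from upthread reads (same table
and column names as the example; the drop is added cleanup):

    begin;
    select * into ordermydata from bigbloatedtable order by some_field;
    truncate bigbloatedtable;
    insert into bigbloatedtable select * from ordermydata;
    drop table ordermydata;
    commit;

TRUNCATE, unlike DELETE, throws away the old heap pages instead of just
marking the rows dead, which is what makes the bloat go away.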



Re: [ADMIN] Vacuum wait time problem

2009-02-13 Thread Scott Marlowe
On Fri, Feb 13, 2009 at 5:02 PM, Michael Monnerie
michael.monne...@is.it-management.at wrote:
 On Friday 13 February 2009, Roger Ging wrote:
 I'm running vacuum full analyze verbose on a table with 20 million
 rows and 11 indexes.  In top, I'm seeing [pdflush] and postgres:
 writer process each using different cpu cores, with wait time well
 above 90% on each of them.  The vacuum has been running for several
 hours

 Roger, I've had the same issue some time ago and wondered why it was so
 slow. I did iostat -kx 5 555 and saw that I/O was also quite low.

 vacuum_cost_delay = 0
 That was the trick for me. It was set to 250(ms), where it took 5 hours
 for a vacuum to run. Now it takes 5-15 minutes.

Wow!!!  250 ms is HUGE in the scheme of vacuum cost delay.  Even 10ms
is usually plenty to slow down vacuum enough to keep it out of your
way, and enough to double to quadruple your vacuum times.

250 is like taking a nap every 5 feet while running a mile.  :)



Re: [ADMIN] Vacuum wait time problem

2009-02-13 Thread Scott Marlowe
On Fri, Feb 13, 2009 at 7:02 PM, Tom Lane t...@sss.pgh.pa.us wrote:
 Scott Marlowe scott.marl...@gmail.com writes:
 On Fri, Feb 13, 2009 at 5:02 PM, Michael Monnerie
 michael.monne...@is.it-management.at wrote:
 vacuum_cost_delay = 0
 That was the trick for me. It was set to 250(ms), where it took 5 hours
 for a vacuum to run. Now it takes 5-15 minutes.

 Wow!!!  250 ms is HUGE in the scheme of vacuum cost delay.  even 10ms
 is usually plenty to slow down vacuum enough to keep it out of your
 way and double to quadruple your vacuum times.

 I wonder whether we ought to tighten the allowed range of
 vacuum_cost_delay.  The upper limit is 1000ms at the moment;
 but that's clearly much higher than is useful, and it seems
 to encourage people to pick silly values ...

I agree.  I can't imagine using a number over 50 or so.



Re: [ADMIN] Vacuum wait time problem

2009-02-13 Thread Scott Marlowe
On Fri, Feb 13, 2009 at 7:24 PM, Scott Marlowe scott.marl...@gmail.com wrote:
 On Fri, Feb 13, 2009 at 7:02 PM, Tom Lane t...@sss.pgh.pa.us wrote:
 Scott Marlowe scott.marl...@gmail.com writes:
 On Fri, Feb 13, 2009 at 5:02 PM, Michael Monnerie
 michael.monne...@is.it-management.at wrote:
 vacuum_cost_delay = 0
 That was the trick for me. It was set to 250(ms), where it took 5 hours
 for a vacuum to run. Now it takes 5-15 minutes.

 Wow!!!  250 ms is HUGE in the scheme of vacuum cost delay.  even 10ms
 is usually plenty to slow down vacuum enough to keep it out of your
 way and double to quadruple your vacuum times.

 I wonder whether we ought to tighten the allowed range of
 vacuum_cost_delay.  The upper limit is 1000ms at the moment;
 but that's clearly much higher than is useful, and it seems
 to encourage people to pick silly values ...

 I agree.  I can't imagine using a number over 50 or so.

Although I'd probably just emit a log warning for anything over that
saying that values over 50 will result in very very long vacuum times.



Re: [ADMIN] User defined functions... - Found to be tsearch

2009-02-12 Thread Scott Marlowe
On Wed, Feb 11, 2009 at 8:11 AM, Carol Walter walt...@indiana.edu wrote:
 Hello,
 It seems these functions are not user defined functions, but tsearch
 functions.  Now, I need to know about tsearch2.  I've searched my system and
 I don't find the functions that are being called from this database.  I'm
 still a newbie with postgres.  Should I see these tables and these functions
 within my user's database or are they supposed to be global?  If they are
 supposed to be available only within the specific user's database, how do I
 make them available to other databases?  I'm trying to make an exact copy of
 this database.  The original database is failing the sanity check.  I can
 copy the data out, but not the schema.  I manually created a duplicate
 schema to copy the data back into, but the copy is failing on these two
 tables because it doesn't find the functions.  I'm not even sure I have the
 tsearch module for this version of postgres.  It's 8.2.4.  I did an upgrade
 to 8.2.11 and I see a tsearch directory under contrib, but there doesn't
 seem to be one anywhere else, where I would expect it to be for the 8.2.4
 version.

tsearch is included in core from 8.3 onwards.  8.2 does not have it
built in.  You'd have to go into the contrib directory and build it (if
you built pgsql from source) and then install it from there.  But
upgrading from 8.2 with tsearch added on to 8.3 is a bit of a pain, so
I'd strongly suggest just going to 8.3 if you can.  If 8.2.4 is missing
a contrib dir and you installed from packages, it's likely you didn't
install the 8.2.4-contrib package.



Re: [ADMIN] Totally inappropriate by-line for a major representative of this organization

2009-02-03 Thread Scott Marlowe
On Tue, Feb 3, 2009 at 12:43 AM, A B gentosa...@gmail.com wrote:
 Here is mine and yes I have gotten a lot of flack over it. What is the
 problem with trying to promote your faith and why do so many people have
 problems with it?

 I have not looked into exactly what you claim to believe, but
 generally: the problem is that the thing you are promoting includes
 very offensive stuff: women abuse, killing gays, slavery and a lot of
 other awful stuff.  It's like admiring Stalin for stopping Hitler but
 refusing to see the awful stuff he did.
 I'd think it would be a major step forward if religions would take a
 close look on what they are selling and edit their holy books.

You do realize you're painting all Christians with a large brush, and
demonizing them, right?  The same method used to dehumanize people and
justify their poor treatment?  I'm pretty sure most of my Christian
friends aren't into abusing women, taking slaves, and killing all the
gheys.

 Mysql is the best and postgresql sucks. There are no problems with
 mysql but postgresql sucks.

Then please stop using PostgreSQL and start using MySQL.  If you need
help unsubscribing, just go here:
http://www.postgresql.org/community/lists/subscribe

If Bruce were on a Wiccan chat list, and his signature looked like it
does, it would be much less offensive than the one you just used.
Why?  Because his has no negative or insulting tone.  Yours does.  See
the difference?

-- 
When fascism comes to America, it will be the intolerant selling
fascism as diversity.



Re: [ADMIN] Totally inappropriate by-line for a major representative of this organization

2009-02-03 Thread Scott Marlowe
Actually, the ACLU protects CIVIL LIBERTIES, like say, freedom of
speech.  I think they'd be on Bruce's side here.

On Tue, Feb 3, 2009 at 7:07 PM, Benjamin Krajmalnik k...@illumen.com wrote:
 He must be a card carrying member of the ACLU, and now he thinks he is
 the G_d of these lists :)

 -Original Message-
 From: pgsql-admin-ow...@postgresql.org
 [mailto:pgsql-admin-ow...@postgresql.org] On Behalf Of Joshua D. Drake
 Sent: Monday, February 02, 2009 4:22 PM
 To: Scott Runnion
 Cc: pgsql-admin@postgresql.org
 Subject: Re: [ADMIN] Totally inappropriate by-line for a
 major representative of this organization

 On Mon, 2009-02-02 at 16:57 -0500, Scott Runnion wrote:


  Regardless, please terminate all further communications
 from all postgresql and EnterpriseDB mail lists until such
 time as this situation can be corrected.
 

 Well others have already responded but let me just boil it
 down for you.

 No.

 Joshua D. Drake

 --
 PostgreSQL - XMPP: jdr...@jabber.postgresql.org
Consulting, Development, Support, Training
503-667-4564 - http://www.commandprompt.com/
The PostgreSQL Company, serving since 1997








-- 
When fascism comes to America, it will be the intolerant selling it as
diversity.



Re: [ADMIN] Warm standby recovery failure

2009-02-03 Thread Scott Marlowe
On Fri, Jan 30, 2009 at 7:52 AM, Tom Lane t...@sss.pgh.pa.us wrote:
 Jaume Sabater jsaba...@gmail.com writes:
 On Fri, Jan 30, 2009 at 2:22 AM, Tom Lane t...@sss.pgh.pa.us wrote:
 We probably should add a caution about this to the manual's discussion
 of how to write archiving scripts.

 I presume you mean the copy/transfer process did not do its job
 correctly, Tom. Therefore, I would advise using a script that compares
 the MD5/SHA1 sums of the origin and destination files and retries the
 copy/transfer process before returning an exit status to the server
 (archive_command call). Do you think this would suffice?

 That seems like using a sledgehammer to swat a fly.  The problem as
 I saw it was that the script was probably simply not checking for
 an error result from 'cp'.

Plus, if you're using a system like rsync it uses checksums to make
sure the data transferred is correct if you use the -c (think that's
the one) switch.
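
For example, a hedged archive_command using rsync's checksum mode (the
host and path are made up):

    archive_command = 'rsync -c %p standby:/wal_archive/%f'

%p and %f are expanded by the server to the path and file name of the
WAL segment being archived; the command's exit status tells the server
whether the copy succeeded or needs to be retried.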
-- 
When fascism comes to America, it will be the intolerant selling it as
diversity.



Re: [ADMIN] Totally inappropriate by-line for a major representative of this organization

2009-02-02 Thread Scott Marlowe
Really?  Really?  I'm a Wiccan and I genuinely like Bruce and
appreciate all the work he does for this list.  That line is HIS
testimony, it's his truth, and I'm quite happy seeing it go by each
time he writes a message.  My girlfriend is Christian (and a few other
things) and I've been to her church, where the pastor welcomed me with
open arms, even knowing I was not Christian.

Leave the guy alone.

On Mon, Feb 2, 2009 at 2:57 PM, Scott Runnion srunn...@mac.com wrote:
 As a subscriber to many postgresql mailing lists, I received one today in
 which Bruce Momjian was responding to a query from Stefano Nichele regarding
 the last_autovacuum field. As always, Bruce's response was cogent and
 undoubtedly accurate. What astounded and dismayed me, however, was his
 signature byline, which reads:

 -  Bruce Momjian  br...@momjian.us        http://momjian.us
    EnterpriseDB                           http://enterprisedb.com

  + If your life is a hard drive, Christ can be your backup. +


 Surely, given the multi-cultural/multi-national/multi-theological
 users/supporters/advocates of the postgresql and EnterpriseDB developments
 and distributions, the inappropriateness of this declaration by Mr. Momjian
 _as a representative of EnterpriseDB_ is inexcusable, unless EnterpriseDB
 itself is positing itself as a secularly-biased organization.
 Regardless, please terminate all further communications from all postgresql
 and EnterpriseDB mail lists until such time as this situation can be
 corrected.

 Scott Runnion
 srunn...@mac.com






-- 
When fascism comes to America, it will be draped in a flag and
carrying a cross - Sinclair Lewis



Re: [ADMIN] Totally inappropriate by-line for a major representative of this organization

2009-02-02 Thread Scott Marlowe
Just FYI, searching the mailing list archives, I get 1000+ hits for
Bruce Momjian and the word thanks in the history of the mailing
lists.  My name gets 782 or so.  Scott Runnion doesn't seem to appear
at all.



Re: [ADMIN] Postgres 8.3.x upgrade

2009-01-22 Thread Scott Marlowe
On Thu, Jan 22, 2009 at 7:50 AM, Jessica Richard rjes...@yahoo.com wrote:
 We are going to upgrade our Postgres servers (most 8.2.9 and some 8.2.4) to
 the 8.3.x version.

 From the postgres web site, I can see the 8.3.5 is the latest version.

 Question for people who have been on 8.3.5:

 Is 8.3.5 very safe to use (no major new bugs)? Is it really better than the
 older versions of  the 8.3.X families?

We've been running it in production since last fall, and 8.3.x is a
very stable and very fast branch.  We had about ten or so queries that
failed due to bad casting, which we fixed in about a day.  Upgrading
postgres, even counting these small issues, is one of the least
painful upgrades I've ever done.

As for 8.3.x with x being 0 through 5 right now, the newer versions
are bug and security fixes only over 8.3.0.  You want those bug fixes,
and running an older update version (i.e. 8.3.1) versus the latest
(i.e. 8.3.5) is rarely a good idea.

Unlike some database products which are known to ship later versions
that break things (**cough** MySQL **cough**) the postgresql hackers
are VERY picky about what goes into a point update.

**cough** (see http://bugs.mysql.com/bug.php?id=31001 )



Re: [ADMIN] tuning tips, speed problem

2009-01-19 Thread Scott Marlowe
On Mon, Jan 19, 2009 at 5:35 AM, BRAHMA PRAKASH TIWARI
prakashr...@gmail.com wrote:
 Hi Francesco

 most probably this is due to the autovacuum option being on; if autovacuum
 is on, then under a high transaction load it slows down the
 speed of the whole database.  Set it off in postgresql.conf and vacuum and
 reindex transactional tables manually every two million
 transactions,
 like:
 vacuum full table name;
 reindex table table name;

Except in certain circumstances, vacuum full should be avoided and
vacuum (regular) used in its place until it is proven ineffective.

Also, it is usually far better to turn up the
autovacuum_vacuum_cost_delay to 10 or 20 and let autovacuum do its
job.  Unless you have a very starved I/O subsystem autovacuum with
cost delay of 20 should have almost no noticeable effect on a
transactional database.
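
As a hedged postgresql.conf example of that advice:

    autovacuum = on
    autovacuum_vacuum_cost_delay = 20   # milliseconds; throttles autovacuum I/O
                                        # instead of disabling it entirely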



Re: [ADMIN] rerunning ./configure

2009-01-16 Thread Scott Marlowe
On Fri, Jan 16, 2009 at 11:32 AM, Carol Walter walt...@indiana.edu wrote:
 Hello,

 If I need to rerun configure, what else do I have to do?  Do I have to run
 gmake uninstall?  Do I have to run gmake distclean?  I've got a running
 version of 8.3.4 (on Solaris 10) but I have to reconfigure so postgres will
 know where the openssl bits are.

Due to some possibility of cached configure output, I always run make
distclean first and start over.  Back in the day of 386-dx40s when
running ./configure took 30 minutes and building postgresql took
another 1.5 hours, I would work harder to avoid doing that, but
nowadays, given how fast a clean build can run, it's no great loss.

I keep a copy of all the switches I use in a file called
configure.local or something like that, which I can put into svn and
check out when I need to build an exact copy, on the rare occasions a
packaged pgsql isn't good enough.
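
A hedged sketch of the full cycle (the prefix and switches are
examples, not the poster's actual ones):

    gmake distclean
    ./configure --prefix=/usr/local/pgsql-8.3.4 --with-openssl
    gmake
    gmake install

The configure.local trick is then just keeping that ./configure line
in a version-controlled file.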



Re: [ADMIN] Strange deadlock error last night

2009-01-13 Thread Scott Marlowe
On Tue, Jan 13, 2009 at 10:37 AM, Scott Whitney swhit...@journyx.com wrote:

 It ended up locking up about 250 customer databases until I restarted the
 postmaster. This is version 8.1.4. Upgrading right now (even to a minor rev)
 is not really an option. This box has been up and running for 306 days. This
 postgres level has been installed for..err...well...at least Aug 9, 2006,
 based on some dates in the directories.

You need to ask yourself how much downtime you can afford.  The 2 or 3
minutes every few months to go from 8.1.x to 8.1.x+1, or the half a
day of downtime when some horrendous bug takes down the whole site
because you didn't update it.  Seriously, that unfrozen template0 bug
that Alvaro mentioned is one of those kinds of bugs.

Nothing like your db going down in the middle of the day with an error
message that it's going down to prevent txid-wraparound-induced loss,
and that you should run vacuum on all your databases in single user mode.

If you can't set aside a minute or two at 0200 hrs, then don't be
surprised when you get one of those failures.



Re: [ADMIN] PGOPTIONS and default_tablespace

2009-01-12 Thread Scott Marlowe
On Mon, Jan 12, 2009 at 2:34 AM,  jan-peter.seif...@gmx.de wrote:
 Hello,

 without success I tried to set the default tablespace via the environment 
 variable PGOPTIONS.

 My tries:
 set PGOPTIONS=default_tablespace='pgdata'
 set PGOPTIONS=default_tablespace='pgdata'
 set PGOPTIONS=default_tablespace(pgdata)
 set PGOPTIONS=default_tablespace('pgdata')

 When I try to connect to the server via psql I get the error message: FATAL: 
 invalid command-line arguments for server process. HINT: Try postgres 
 --help for more information.

I do this one of two ways usually.  Either I set the user / database
to have a default tablespace permanently via alter user / database, or
I put it at the top of my script I'm running.  PGOPTIONS isn't really
the place I think of setting it myself.
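
For the record, the form the server accepts in PGOPTIONS is a -c flag,
and the persistent alternatives look like this ('pgdata' is the
tablespace name from the question; the database and user names are
made up):

    set PGOPTIONS=-c default_tablespace=pgdata

    -- or, persistently, via alter database / user:
    ALTER DATABASE mydb SET default_tablespace = 'pgdata';
    ALTER USER myuser SET default_tablespace = 'pgdata';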



Re: [ADMIN] error vacuum full

2009-01-05 Thread Scott Marlowe
On Mon, Jan 5, 2009 at 6:17 AM, Ygor Degani ygordeg...@gmail.com wrote:
 I use postgres-8.3.5. My database is 168 GB in size. Whenever I try to do
 vacuum full happens the following error:

 WARNING:  terminating connection because of crash of another server process
 DETAIL:  The postmaster has commanded this server process to roll back the
 current transaction and exit, because another server process exited
 abnormally and possibly corrupted shared memory.
 HINT:  In a moment you should be able to reconnect to the database and
 repeat your command.
 vacuumdb: vacuuming of database acotel_brazil_vas failed: server closed
 the connection unexpectedly
 This probably means the server terminated abnormally
 before or while processing the request.

 If I have autovacuum enabled, do I need to do vacuum full?

You have two problems.  One is you're running vacuum full regularly,
which you shouldn't need, and which can cause bloated indexes.  Normal
autovacuum should handle your database fine unless you're deleting a
huge part of a table over and over.

As to why it's crashing, that's another story.  Anything in the system
logs about the oom killer kicking in?  Have you got work_mem set to
something huge?  maintenance_work_mem really huge?  shared_buffers,
etc.?  What have you changed from stock in postgresql.conf?



Re: [ADMIN] data convert

2009-01-04 Thread Scott Marlowe
What's cybercluster?

On Sun, Jan 4, 2009 at 8:07 PM, Jumping quzhengp...@gmail.com wrote:
 i updated my database from 8.1.9 to cybercluster-1.2.0:
 1:dump out my database from 8.1.9
  the  size of sql file is about 1.3G
  the database initdb use  /opt/postgresql-8.1.9/bin/initdb -D
 /opt/postgresql-8.1.9/data --encoding=latin1 --lc-ctype=C
 2:dump into cyberclusterr-1.2.0 on node1   ( i have two nodes:node1 and node2)
 the database initdb use /opt/cyberclusterr-1.2.0/bin/initdb -D
 /opt/cyberclusterr-1.2.0/data --encoding=UTF-8

 the problem is : the node1 and the node2 could not sync on some tables.
 Any ideas or hints ?

 Best Regards.
 Jumping
 --
 Don't tell me how many enemies we have, but where they are!
 (ADV:Perl -- It's like Java, only it lets you deliver on time and under 
 budget.)





-- 
When fascism comes to America, it will be draped in a flag and
carrying a cross - Sinclair Lewis



Re: [ADMIN] data convert

2009-01-04 Thread Scott Marlowe
Sorry, I was being subtle.  I looked it up and it looks interesting.
However, I'm betting there's a mailing list for this software that's
likely to have more people to answer your question than the -admin
list.

I'll have to look it up.

On Sun, Jan 4, 2009 at 9:48 PM, Jumping quzhengp...@gmail.com wrote:
 In many cases asynchronous replication is just not enough to model a
 certain business case. Therefore Cybertec Schönig & Schönig GmbH
 offers a synchronous multimaster replication solution for PostgreSQL
 called Cybercluster.



Re: [ADMIN] access data in php

2009-01-02 Thread Scott Marlowe
On Fri, Jan 2, 2009 at 10:11 AM, Marc Fromm marc.fr...@wwu.edu wrote:
 If I gather the sql results with this code
 $results = pg_query($dbconn,$query);

 I can check if there is no returned data with this code
 $rows = pg_fetch_assoc($result);

 but if I then use a while loop to display data (if there is data returned)
 with this code
 while ($row = pg_fetch_array($result)){ . . . }

 I have to execute this code a second time before the while loop
 $results = pg_query($dbconn,$query);

 If I do not execute the $results line a second time the while loop does not
 work properly.

 Why is $results losing its value when it hits the while loop?

It shouldn't be.  Got a complete, short sample that does this?



Re: [ADMIN] access data in php

2009-01-02 Thread Scott Marlowe
On Fri, Jan 2, 2009 at 11:09 AM,  iog...@free.fr wrote:
 pg_fetch_assoc behaves like pg_fetch_array: it increments the internal
 pointer to the current result.
 So if you call it once, then pg_fetch_array will return the 2nd result in
 the result set.
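
A small illustration of that pointer behavior (hedged; $result is
assumed to hold at least two rows):

    $first  = pg_fetch_assoc($result);    // returns row 0, advances the pointer
    $second = pg_fetch_array($result);    // returns row 1, not row 0
    $again  = pg_fetch_array($result, 0); // the optional row argument rewinds to row 0

So counting rows with pg_num_rows(), as suggested below, avoids
consuming the first row.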

Wow, I'm so used to seeing

$rows = pg_num_rows() that that's what I saw up there.



Re: [ADMIN] access data in php

2009-01-02 Thread Scott Marlowe
On Fri, Jan 2, 2009 at 12:40 PM, Marc Fromm marc.fr...@wwu.edu wrote:
 This is my code:
 <?php
 $dbconn = pg_connect("host=localhost port=5432 user=postgres dbname=studentalerts");

 if(isset($_GET['value'])){
    $w_number=$_GET['value'];
 }

You need to scrub user input.  Use pg_escape_string($_GET['value']).

 //echo $w_number;

 $query = "select first_name, last_name, alert from alert_list where w_number='$w_number'";
 $result = pg_query($dbconn,$query);
 if (!$result) {
    echo "Problem with query " . $query . "<br/>";
    echo pg_last_error();
    exit();
 }

 $rows = pg_fetch_assoc($result);

Change this to

$rows = pg_num_rows($result);

 if ($rows==0){
    echo "There are no alerts for $w_number!\n\n";
 }else{
    $result = pg_query($dbconn,$query);
    $count=1;
    while ($row = pg_fetch_array($result)){
        echo "Alert $count: ";
        echo htmlspecialchars($row['first_name']) . " ";
        echo htmlspecialchars($row['last_name']);
        echo "\n";
        echo htmlspecialchars($row['alert']);
        echo "\n\n";
        $count++;
    }
 }
 if ($w_number==""){echo "Enter a W number!\n\n";}
 echo "End of line";

 pg_free_result($result);
 pg_close($dbconn);
 ?>

 -----Original Message-----
 From: Scott Marlowe [mailto:scott.marl...@gmail.com]
 Sent: Friday, January 02, 2009 10:28 AM
 To: iog...@free.fr
 Cc: Marc Fromm; pgsql-admin@postgresql.org
 Subject: Re: [ADMIN] access data in php

 On Fri, Jan 2, 2009 at 11:09 AM,  iog...@free.fr wrote:
 pg_fetch_assoc behave like pg_fetch_array: it increments the internal
 pointer to the current result.
 So if you call it once, then pg_fetch_array will return the 2nd result
 in the result set.

 Wow, I'm so used to seeing

 $rows = pg_num_rows() that that's what I saw up there.




-- 
When fascism comes to America, it will be draped in a flag and
carrying a cross - Sinclair Lewis



Re: [ADMIN] postgres block_size problem

2008-12-30 Thread Scott Marlowe
On Tue, Dec 30, 2008 at 5:28 AM, Bhujbal, Santosh
sbhuj...@starentnetworks.com wrote:
 Hi All,

 We were getting the following error on executing a COPY command on
 postgres 8.2.0.

First and foremost, you need to run the latest version of 8.2, not
8.2.0.  .0 releases of postgresql tend to have the most bugs.  You're
missing two years of updates by running 8.2.0, and who knows what bugs
you may be running into because of it.

While you're at it, you might want to look at upgrading to 8.3.5,
which has quite a few enhancements over 8.2.x.  But it's not that big
of a deal; 8.2 is a good performer overall.

 SQL execution failed, Internal error. SQL Statement: (COPY command failed
 with error: ERROR:  row is too big: size 8200, maximum size 8136)

Hmmm.  What exactly are you trying to insert into what?  Normally when
you see something like this it's an index on a text field that causes
this type of problem.  But I'm not sure what's going on here.
Normally postgresql will move large things out of line into a toast
table.  Are you running a non-standard storage parameter?

 So, in order to fix the error we tried increasing the block size to 16k and
 recompiled the postgres library.

If you can get away with sticking to standard 8k blocks, do so.  The
code base is nowhere near as well tested with block sizes over 8k as
it is with 8k blocks, the default.

 The problem at hand was resolved with the
 new postgres library and we were now able to populate data for the mentioned
 query successfully. However, this experiment fails when we tried to run the
 new postgres on data directory generated with 8k block size which is likely
 to occur during an upgrade with postgres log showing the following error:

Yeah, running a non-standard block size is only advised if you're
willing to go to the extra effort each time to build a new package by
hand, and to require any users of your application to do the same or
run a custom package you provide.

 What could be the possible solution to this?

Can you explain in more detail exactly what you're doing to cause the
problem?  A short test case you can post would be most helpful.



Re: [ADMIN] Context-switch storm in 8.1.15

2008-12-30 Thread Scott Marlowe
On Tue, Dec 30, 2008 at 4:02 AM, Iñigo Martinez Lasala
imarti...@vectorsf.com wrote:
 Hi everybody.

 Recently our company has been granted a contract for maintaining an on-line
 store.
 The website has been developed under J2EE with Postgres 8.1 as the database
 backend. The system had been working without problems for several months,
 but with Christmas, access to the web portal has risen a lot.
 The database suffers from a performance problem under high load. Lots of
 context switches happen, reaching up to 200,000 cs per second.
 This system is a 16GB, 4-CPU Intel Xeon MP with HT enabled and RAID10
 iSCSI storage, kernel 2.4.21 (RHAS 3).

 Half of CPU power is lost on system time, as you can see.

 Vmstat on high load:
 19  0  0 281852 150316 13732396003280 1071 128209 41 43 16  0
 75  0  0 282040 150316 1373239600 0 0  719 148023 40 38 22  0
  3  0  0 284208 150324 137324120016   484  728 145371 39 40 21  0
 12  0  0 278364 150324 13732508008056  660 157533 35 42 23  1
  6  0  0 284972 150324 137325800032   200  685 142014 39 41 20  0
  8  0  0 296424 150324 137326240040   136  554 139601 41 39 20  0
 85  0  0 265004 150324 13732664003248  642 142437 48 32 20  0
 32  0  0 267432 150324 1373268000 0   788 1003 144409 37 42 21  0
 13  0  0 270468 150324 1373267600 024  724 146663 42 40 19

 Vmstat 20 seconds after stopping the portal:
  8  0  0 962388 206744 1377154800 0 0  131 199784 11 38 51  0
  3  0  0 970212 206744 1377154800 0  1856  305 203639 12 40 48  0
 10  0  0 975036 206744 1377158800 0   128  212 201899 11 36 52  0
  3  0  0 970272 206744 137716520016   232  685 202672 14 41 44  0
  6  0  0 1008320 206744 1377165600 040  198 196298 14 46 39  0
  3  0  0 1034836 206744 1377165600 0 0  147 202731 12 39 50  0
  3  0  0 1037764 206752 1377165600 0   952  202 202933 11 39 50  0
  5  0  0 1078132 206752 1377165600 0 0  154 203408 18 35 47  0
  6  0  0 1110572 206752 1377165600 0 0  153 196864 18 41 41  0
  4  0  0 1105440 206752 137718240016   592  461 207538 12 37 51  1


 I've read about this problem with versions prior to 8.2. However, at this
 moment it is not possible to migrate to 8.2 due to the number of stored
 procedures, and we don't have enough time to test ALL procedures in order
 to migrate to 8.2 (or 8.3).
 However, we have performed light tests with 8.2 under high load and this
 problem has been solved or mitigated.

Are you using connection pooling, or do you have a whole bunch of
connections at once?  How many connections do you have that are idle
versus active?

 Now the question. Is there any backport patch for 8.1 that solves
 context-switch storm?

It's far more likely that a back-ported 8.1.x would have problems than
that you'd run into issues with 8.2 or 8.3 with stored procs.  I'd skip
8.2 and go straight to testing on 8.3.  We upgraded from 8.1 to 8.3 on
our production database.  The only issue we had was that a lot of
implicit casts had been removed, and some older code relied on an
implicit date :: text cast that had gone away.  Since relying on date
being a text string is bad form anyway, we fixed the code and went on
from there.

Usually when something like this doesn't get back patched, it's
because the code base was so different in that area that backporting
it represents a real danger to the code stability.

If you upgrade to 8.3 you're upgrading to a stable release that solves
your problems.  If you backport the patch to 8.1 you're running a
version tested only by you.



Re: [ADMIN] postgres block_size problem

2008-12-30 Thread Scott Marlowe
On Tue, Dec 30, 2008 at 12:56 PM, Tom Lane t...@sss.pgh.pa.us wrote:
 Scott Marlowe scott.marl...@gmail.com writes:
 On Tue, Dec 30, 2008 at 5:28 AM, Bhujbal, Santosh
 sbhuj...@starentnetworks.com wrote:
 SQL execution failed, Internal error. SQL Statement: (COPY command failed
 with error: ERROR:  row is too big: size 8200, maximum size 8136)

 Hmmm.  What exactly are you trying to insert into what?  Normally when
 you see something like this it's an index on a text field that causes
 this type of problem.

 It's not an index because the error message would say so.  Evidently
 it's a row that TOAST is unable to squeeze down to a workable size,
 which suggests a very large number of columns.  I'd suggest rethinking
 the table schema ...

Oh yeah, that does sound like wide tables.  Yeah, it's likely highly
denormalized or something like that.

I thought the error message was different, but I haven't seen it in years... :)

-- 
Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-admin


Re: [ADMIN] ssl database connection problems...

2008-12-29 Thread Scott Marlowe
On Mon, Dec 29, 2008 at 2:23 PM, Carol Walter walt...@indiana.edu wrote:
 Hello,

 I've just created a new instance of postgres.  It's running on a Sun server
 running Solaris 10.  I configured it with ssl using port 5433.  The server
 starts and runs.  I can connect to it from the local host and list the
 databases, connect to them etc.  I can't connect to the database instance
 from a remote host.  I get a message as follows:

Are you connecting via unix sockets or tcp/ip sockets locally?

 walt...@cat:~$ psql -h db -U walterc -d walterc -p 5433
 psql: could not connect to server: Connection refused
Is the server running on host "db" and accepting
TCP/IP connections on port 5433?

What does listen_addresses say in postgresql.conf for this instance?
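
If it's the stock setting you're only listening on localhost.  A minimal
sketch of what to check (assuming a default postgresql.conf):

listen_addresses = '*'    # default is 'localhost', which refuses remote tcp/ip
port = 5433

Then restart, and make sure there's a matching host/hostssl line in
pg_hba.conf for the remote client's address.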

-- 
Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-admin


Re: [ADMIN] vacuum monitoring question

2008-12-24 Thread Scott Marlowe
On Wed, Dec 24, 2008 at 9:31 AM, Kevin Kempter
kev...@consistentstate.com wrote:
 Hi All;

 I'm working on a vacuum monitor script to alert us when tables start to grow
 out of control due to dead space. Initially I excluded the system catalog
 schemas (pg_catalog, pg_toast, etc), however I wonder if maybe I should
 monitor these as well. Plus I'm looking for any input as to specifically
 what to look for - I've calculated a 'fill factor' which shows the average
 page fill as a percentage, but I wonder what other metrics may help monitor
 possible problem tables efficiently.

It's a good idea to keep an eye on system catalogs, especially if you
have a fair bit of DDL going on in your usage.  I'd take a look at
what the check_postgres.pl script does.  No need to reproduce all
that work on your own.

-- 
Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-admin


Re: [ADMIN] vacuum monitoring question

2008-12-24 Thread Scott Marlowe
On Wed, Dec 24, 2008 at 12:09 PM, Kevin Kempter
kev...@consistentstate.com wrote:

 Thanks for the feedback - where can I find the check_postgres.pl script?

http://bucardo.org/check_postgres/

-- 
Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-admin


Re: [ADMIN] Psql errors

2008-12-23 Thread Scott Marlowe
On Tue, Dec 23, 2008 at 11:10 AM, Carol Walter walt...@indiana.edu wrote:
 Well, this is sort of interesting.  I'm running Solaris 10.  The problem was
 that I didn't export all of the necessary shared libraries.  What's
 interesting is that it seems to matter what order these are in, in the path.
  If I export LD_LIBRARY_PATH and I put the library identified as the
 libdir in the arguments for ./configure first, it works; if I put it at
 the end, it doesn't.

What, exactly, do your export statements look like?

-- 
Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-admin


Re: [ADMIN] What is VACUUM waiting for?

2008-12-14 Thread Scott Marlowe
On Sun, Dec 14, 2008 at 11:27 AM, Michael Monnerie
michael.monne...@is.it-management.at wrote:
 On Samstag 13 Dezember 2008 Matthew T. O'Connor wrote:
 Do you have any vacuum delay setting turned on?  Perhaps they are too
 high?

 Oh, I didn't remember there were some for normal vacuum, just the
 autovacuum ones.

 vacuum_cost_delay = 250
 vacuum_cost_page_hit = 1
 vacuum_cost_page_miss = 10
 vacuum_cost_page_dirty = 20
 vacuum_cost_limit = 1000

 Are those values too high/low? What should one use?

 I think I will disable it at this one server, setting
 vacuum_cost_delay = 0
 because it's only one DB here. Still, I'd like to understand what values
 would be reasonable on a server with many DBs, as we have some.

Yeah, any vacuum cost_delay over 20 or so is pretty high.  I set it to
10 and vacuum doesn't get in the way but still runs reasonably fast.

-- 
Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-admin


Re: [ADMIN] install 8.2.9

2008-12-09 Thread Scott Marlowe
On Tue, Dec 9, 2008 at 3:11 PM, rocio mejia [EMAIL PROTECTED] wrote:
 Where is the source to install postgres 8.2.9?  Is someone alive in this
 group list?

Is there a reason you need a specific older version?  Is 8.2.10 not
old enough?  Is there a reason to avoid 8.2.11 that I don't know of?

-- 
Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-admin


Re: [ADMIN] Connection Limit Exceeded

2008-12-09 Thread Scott Marlowe
On Tue, Dec 9, 2008 at 4:10 PM, Steve Kecskes [EMAIL PROTECTED] wrote:
 Hi,

 Just wondering what actually happens when the connection limit exceeds.

 Dec 10 05:19:45 pallas1 postgres[373]: [2-1] FATAL:  connection limit
 exceeded for non-superusers

 Are the requests that are rejected pooled and processed when connection
 slots are available?

Nope, they're refused, and you have to come back at a later time and try again.

-- 
Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-admin


Re: [ADMIN] Change user password

2008-12-09 Thread Scott Marlowe
On Fri, Dec 5, 2008 at 8:42 AM, Dan Scott [EMAIL PROTECTED] wrote:
 Hi,

 Is is possible to modify a user password using a command line script?
 I know about the ALTER USER command from within the client, but this
 looks like it's quite insecure, storing the password in the history. I
 know it's possible to insert the encrypted password with ALTER USER
 but that means generating the hash elsewhere. A simple 'alteruser -P
 username' which then prompts for the user password would be nice. Does
 this exist?

\password from the psql prompt does that.
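
For example (role name made up; it prompts twice and ships only the hash,
so nothing readable lands in the history):

$ psql -U postgres
postgres=# \password someuser
Enter new password:
Enter it again: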

-- 
Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-admin


Re: [ADMIN] Vacuum Problems

2008-12-08 Thread Scott Marlowe
On Mon, Dec 8, 2008 at 6:04 AM, Rafael Domiciano
[EMAIL PROTECTED] wrote:
 Hello guys,
 I tried to modify my vacuum routine, and started to only run vacuum verbose
 analyze daily followed by a reindex weekly.
 But I am still having problems in my database. The load (from uptime) is
 hard to keep below 10.
 I'm thinking that my hardware is not as good as it was some time ago.
 The machine is a 2 x Intel Xeon Dual-Core 2.3 GHz with 2 Gb RAM. The load on
 the machine is about 1 transactions / m
 Maybe I need more RAM memory?

Likely you need more hard drives and / or a quality caching RAID
controller.  What's your I/O subsystem look like now?

-- 
Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-admin


Re: [ADMIN] Vacuum Problems

2008-12-08 Thread Scott Marlowe
On Mon, Dec 8, 2008 at 6:04 AM, Rafael Domiciano
[EMAIL PROTECTED] wrote:
 Hello guys,
 I tried to modify my vacuum routine, and started to only run vacuum verbose
 analyze daily followed by a reindex weekly.

Have you tried running autovacuum with a naptime of 10 or 20?

-- 
Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-admin


Re: [ADMIN] Vacuum Problems

2008-12-08 Thread Scott Marlowe
On Mon, Dec 8, 2008 at 9:11 AM, Rafael Domiciano
[EMAIL PROTECTED] wrote:
 How do I see my I/O subsystem? vmstat? If so, follow:

No, I mean, how many drives do you have, what kind of RAID controller,
if any, how they're configured, and so on.  :)  Sorry wasn't really
clear there was I?

The vmstat numbers are horrible by the way; assuming they're in 1k
blocks, you're reading and writing at under 1 Meg a second.  It almost
looks like a degraded RAID-5 array.

-- 
Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-admin


Re: [ADMIN] Vacuum Problems

2008-12-08 Thread Scott Marlowe
On Mon, Dec 8, 2008 at 9:29 AM, Rafael Domiciano
[EMAIL PROTECTED] wrote:
 1 drive only. This server has no RAID.
 Do you think that the I/O is very high and I need RAID?!

Not necessarily.  Like I said, my laptop currently is about 25 to 30
times faster writing to disk than your server.  So, I think something
is wrong.  Try doing this, run vmstat 1, and while that's running, in
another window, do something like:

time dd if=/dev/zero of=/mnt/myslowdrive/testfile bs=1000000 count=500
time dd of=/dev/null if=/mnt/myslowdrive/testfile

and see what vmstat says while that's running, and how long it takes.
I'm guessing something is very wrong and you'll get really low
numbers.

-- 
Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-admin


Re: [ADMIN] Vacuum Problems

2008-12-08 Thread Scott Marlowe
On Mon, Dec 8, 2008 at 10:17 AM, Rafael Domiciano
[EMAIL PROTECTED] wrote:
 Here's the output. While the process was running my database sat for a while
 without doing anything.
 You said that I would probably get low numbers, but what numbers?

We're looking for MB/s in the bi/bo fields in vmstat (blocks in / blocks out).

 [EMAIL PROTECTED] banco]# time dd if=/dev/zero of=/banco/testfile bs=1000000
 count=500
 500+0 records in
 500+0 records out
 500000000 bytes (500 MB) copied, 85,4485 seconds, 5,9 MB/s
 real1m25.451s
 user0m0.003s
 sys 0m1.617s
 [EMAIL PROTECTED] banco]# time dd of=/dev/null if=/banco/testfile
 976562+1 records in
 976562+1 records out
 500000000 bytes (500 MB) copied, 47,5025 seconds, 10,5 MB/s
 real0m47.543s
 user0m0.457s
 sys 0m1.470s
 And the vmstat output:

 procs -----------memory---------- ---swap-- -----io---- --system-- -----cpu------
  r  b   swpd  free  buff   cache si so  bi  bo   in   cs us sy id wa st
  0  8 111760 56432 19812 1305692  0  0 744 832 2488 1518  5  3 46 46  0

Here, the bi/bo numbers are 744 in and 832 out.  That's kB/second we're
looking at.

Lots of the same numbers cut out.

 procs -----------memory---------- ---swap-- -----io---- --system-- -----cpu------
  r  b   swpd  free  buff   cache si so  bi    bo   in   cs us sy id wa st
  0  6 111760 55452 18528 1296412  0  0  88 25040 1843 1436  2  3 34 61  0
  1  7 111760 52352 18540 1296472  0  0  36 15220 1467  944  3  2 27 68  0
  0  6 111760 54152 18552 1296568  0  0  88 42296 1609  822  1  2 38 59  0
  1  8 111760 52412 18580 1296752  0  0 172 27052 1567  950  2  3 27 69  0
  0  6 111724 53904 18632 1297008 32  0 580  1504 1902 2320  5  6 23 66  0
  1  6 111724 54280 18660 1297200  4  0 200  1060 2014 1783  5  3 44 48  0
  1  7 111676 51388 16340 1302348  8  0 156  1212 1684  848  2  2 53 42  0
  1  6 111668 55040 14864 1301048  0  4 152 46328 1595 5108  1  5 40 54  0

Now we're showing that we can write to the disk at 25 to 42 Megs a
second, not too bad.  But it looks really bursty, like it can sustain
this throughput for only a few seconds.   Try writing a larger file
and run vmstat 10 or 60 as well and see what the average over a longer
time with a larger file is.  I get a feeling your machine has a hard
time sustaining throughput for some reason.

-- 
Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-admin


Re: [ADMIN] recover or move databases

2008-12-08 Thread Scott Marlowe
On Mon, Dec 8, 2008 at 4:27 PM, PJ [EMAIL PROTECTED] wrote:
 I am trying to recover postgres databases from a crashed disk.
 I have set up a new installation of FreeBSD 7.0 with postgresql8.3.
 The old databases are accessible when the repaired disk is mounted :
 /oldfsb/local/pgsql/data/base/.
 How can I move these databases to the new /usr/local/pgsql/data/base/ ?
 Old and new are both v. 8.3
 Copying does not do it.
 This seems to be one way, but it seems rather long... and complicated..
 Any help would be appreciated,

Not copy; you need to start the postmaster on the old directory, use
pg_dump or pg_dumpall to get the data out, then stop the db, start
it back up on the new directory, and use psql or pg_restore.
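
Roughly like this, using the paths from your message (an untested sketch;
run it as the postgres user and adjust ports to taste):

pg_ctl -D /oldfsb/local/pgsql/data -o '-p 5433' start
pg_dumpall -p 5433 > /tmp/all.sql
pg_ctl -D /oldfsb/local/pgsql/data stop

pg_ctl -D /usr/local/pgsql/data start
psql -p 5432 -f /tmp/all.sql postgres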

-- 
Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-admin


Re: [ADMIN] upgrade via rpm

2008-12-07 Thread Scott Marlowe
On Sun, Dec 7, 2008 at 1:23 PM, Gerd Koenig [EMAIL PROTECTED] wrote:
 Hello,

 we're planning an upgrade from Postgres 8.3.1 to latest 8.3.5 via rpm
 (Opensuse 10.3 - 64bit).
 Is it really that simple ?

 1.) stop cluster (e.g. pg_ctl stop)
 2.) perform the upgrade (rpm -Uvh *.rpm)
 3.) start the cluster (pg_ctl start)

 thanks in advanceGERD.

Yep, but:
0.) take a backup  -- always, just in case

assuming the rpms are built from the same .spec (i.e. things like
integer timestamps don't change) then it is that easy.
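
Concretely, something like this (paths and rpm names are from memory, so
treat it as a sketch):

pg_dumpall -U postgres > /backup/pre_835_dump.sql
pg_ctl -D /var/lib/pgsql/data stop
rpm -Uvh postgresql*8.3.5*.rpm
pg_ctl -D /var/lib/pgsql/data start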

-- 
Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-admin


Re: [ADMIN] [GENERAL] Planner picking topsey turvey plan?

2008-12-06 Thread Scott Marlowe
What does explain analyze yourqueryhere say?

On Sat, Dec 6, 2008 at 1:33 PM, Glyn Astill [EMAIL PROTECTED] wrote:
 Anyone?


 --- On Fri, 5/12/08, Glyn Astill [EMAIL PROTECTED] wrote:

 From: Glyn Astill [EMAIL PROTECTED]
 Subject: [GENERAL] Planner picking topsey turvey plan?
 To: [EMAIL PROTECTED]
 Date: Friday, 5 December, 2008, 2:23 PM
 Hi people,

 Does anyone know how I can change what I'm doing to get
 pgsql to pick a better plan?

 I'll explain what I've done below but please
 forgive me if I interpret the plans wrong as I try to
 describe, I've split it into 4 points to try and ease
 the mess of pasting in the plans..


 1) I've created a view orders that joins
 two tables credit and mult_ord
 together as below:

 CREATE VIEW orders AS
   SELECT b.mult_ref, a.show, MIN(a.transno) AS
 lead_transno, COUNT(a.transno) AS
 parts, SUM(a.tickets) AS items,
 SUM(a.value) AS value
   FROM (credit a LEFT OUTER JOIN mult_ord b ON a.transno =
 b.transno)
   GROUP BY b.mult_ref, a.show;



 2) And an explain on that view comes out as below, it's
 using the correct index for the field show on
 credit which doesn't look too bad to me:

 DB=# explain select a.artist, a.date, b.mult_ref, b.items,
 b.parts from (show a inner join orders b on a.code = b.show)
 where b.show = 357600;
QUERY PLAN
 
  Nested Loop  (cost=15050.79..15099.68 rows=1013 width=70)
-  Index Scan using show_index01 on show a
 (cost=0.00..8.37 rows=1 width=26)
  Index Cond: (code = 357600::numeric)
-  HashAggregate  (cost=15050.79..15071.05 rows=1013
 width=39)
  -  Nested Loop Left Join  (cost=0.00..15035.60
 rows=1013 width=39)
-  Index Scan using credit_index04 on
 credit a  (cost=0.00..4027.30 rows=1013 width=31)
  Index Cond: (show = 357600::numeric)
-  Index Scan using mult_ord_index02 on
 mult_ord b  (cost=0.00..10.85 rows=1 width=17)
  Index Cond: (a.transno = b.transno)
 (9 rows)



 3) Then I have a table called show that is
 indexed on the artist field, and a plan for listing the
 shows for an artist is as below, again this doesn't look
 too bad to me, as it's using the index on artist.

 DB=# explain select * from show where artist =
 'ALKALINE TRIO';
  QUERY PLAN
 -
  Bitmap Heap Scan on show  (cost=9.59..582.41 rows=153
 width=348)
Recheck Cond: ((artist)::text = 'ALKALINE
 TRIO'::text)
-  Bitmap Index Scan on show_index07
 (cost=0.00..9.56 rows=153 width=0)
  Index Cond: ((artist)::text = 'ALKALINE
 TRIO'::text)
 (4 rows)



 4) So.. I guess I can join show ->
 orders, expecting an index scan on
 show for the artist, then an index scan on
 orders for each show.

 However it seems the planner has other ideas, it just looks
 backwards to me:

 DB=# explain select a.artist, a.date, b.mult_ref, b.items,
 b.parts from (show a inner join orders b on a.code = b.show)
 where artist = 'ALKALINE TRIO';
  QUERY PLAN
 
  Hash Join  (cost=1576872.96..1786175.37 rows=1689
 width=70)
Hash Cond: (a.show = a.code)
-  GroupAggregate  (cost=1576288.64..1729424.39
 rows=4083620 width=39)
  -  Sort  (cost=1576288.64..1586497.69
 rows=4083620 width=39)
Sort Key: b.mult_ref, a.show
-  Hash Left Join
 (cost=321406.05..792886.22 rows=4083620 width=39)
  Hash Cond: (a.transno = b.transno)
  -  Seq Scan on credit a
 (cost=0.00..267337.20 rows=4083620 width=31)
  -  Hash
 (cost=160588.80..160588.80 rows=8759380 width=17)
-  Seq Scan on mult_ord b
 (cost=0.00..160588.80 rows=8759380 width=17)
-  Hash  (cost=582.41..582.41 rows=153 width=26)
  -  Bitmap Heap Scan on show a
 (cost=9.59..582.41 rows=153 width=26)
Recheck Cond: ((artist)::text =
 'ALKALINE TRIO'::text)
-  Bitmap Index Scan on show_index07
 (cost=0.00..9.56 rows=153 width=0)
  Index Cond: ((artist)::text =
 'ALKALINE TRIO'::text)
 (15 rows)

 Any idea if I can get around this?






 --
 Sent via pgsql-general mailing list
 ([EMAIL PROTECTED])
 To make changes to your subscription:
 http://www.postgresql.org/mailpref/pgsql-general




 --
 Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org)
 To make changes to your subscription:
 http://www.postgresql.org/mailpref/pgsql-admin




-- 
When fascism comes to America, it will be draped in a flag and
carrying a cross - Sinclair Lewis

-- 
Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-admin

Re: [ADMIN] autovacuum benchmarking ...

2008-12-04 Thread Scott Marlowe
On Wed, Dec 3, 2008 at 3:34 PM, AmitKumar Jain [EMAIL PROTECTED] wrote:
 Hi Team

 1. Can we know how much time an ongoing vacuum process in the DB will take?
 2. Is there any benchmarking tool for the vacuum process?
 3. Can we know what exact operations have been done by the vacuum process?
 4. I have read that the VACUUM process does not lock the table, but due to it
 insertion into the table gets very slow. When I killed the vacuum daemon
 insertion got faster; is it a bug in 8.3.3?


 I know I should not ask all these questions in one go, but I urgently need
 answers, as the vacuum process is just screwing my production database. I
 would be thankful for any suggestions on the above questions...

At least they're all related, and you're not spamming a half dozen
pgsql lists, so really, it's fine to put them all together.

1: Vacuum takes as long as it takes.  On faster machines it runs
faster.  Generally it's very I/O dependent, so lots of hard drives
will make it run faster.  It's also influenced by how much other
activity is going on.  Since it doesn't lock, it's not a big deal as
long as it finishes in a reasonable amount of time.
2: \timing
vacuum tablename;
3: logging; I believe you have to turn log_min_messages down (up?) to
get log entries. Also look in pg_stat_user_tables (examples below).
4: You need to look into the settings autovacuum_vacuum_cost_delay and
vacuum_cost_delay.  Setting them to 10 or 20 or at most 30 or 40 will
slow down vacuum enough that it shouldn't too heavily impact the rest
of the system.
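
For 3, something like this shows the last vacuum times per table (these
catalog columns exist in 8.2 and up; untested here):

SELECT relname, last_vacuum, last_autovacuum, last_analyze
  FROM pg_stat_user_tables
 ORDER BY relname;

And for 4, the knobs live in postgresql.conf, e.g.:

vacuum_cost_delay = 20             # manual vacuums, in ms
autovacuum_vacuum_cost_delay = 20  # autovacuum workers, in ms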

-- 
Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-admin


Re: [ADMIN] User info?

2008-12-03 Thread Scott Marlowe
On Wed, Dec 3, 2008 at 8:30 AM, Carol Walter [EMAIL PROTECTED] wrote:
 Is there a way to tell when the last time a database in a cluster was
 accessed?  I have a large number of student databases that are probably
 inactive, but I don't want to just destroy.  I'd like to archive these, but
 I need to be able to tell when they were last accessed?

No direct method from within the db that I know of, but you could use
oid2name / a list of db oids, then go into the data/base directory
and see when the files in the individual db directories were last
updated.  That won't tell you the last time they were ACCESSED, but
it will give you an idea of whether they're being updated or not.
Note that vacuum may change the last update times so it's not
foolproof by any means.

The other method is to log connections and then scrape the logs.
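
A rough sketch of the first method (oid2name is in contrib; the 16384 is
just an example database oid):

oid2name                      # lists databases and their oids
ls -lt $PGDATA/base/16384     # mtimes of that database's files

For the log-scraping method, in postgresql.conf:

log_connections = on
log_line_prefix = '%t %u %d '   # timestamp, user, database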

-- 
Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-admin


Re: [ADMIN] Vacuum Problems

2008-11-26 Thread Scott Marlowe
2008/11/26 Rafael Domiciano [EMAIL PROTECTED]:
 The database is around 40 Gb.
 If I don't use vacuum full every day the database gets very slow.

 There is no deadlock on the database.

You didn't mention  if you were using autovacuum or not.  You also
didn't mention whether or not you'd tried regular vacuums if for some
reason you can't use autovacuum.  Vacuum full is very intrusive and
can cause index bloat.  It's more of a repair operation than a routine
maintenance one.

What do the last 10 or so lines of vacuum verbose run as a super user say?

-- 
Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-admin


Re: [ADMIN] Vacuum Problems

2008-11-26 Thread Scott Marlowe
On Wed, Nov 26, 2008 at 10:21 AM, Rafael Domiciano
[EMAIL PROTECTED] wrote:
 I'm not using autovacuum. Regular vacuum goes ok.
 To see the last 10 lines of verbose output I will need to run vacuum tonight.
 If I run a reindex before the vacuum full, does it increase the speed of
 the vacuum? I found something about it while googling.

You should look into enabling autovac.  You can set the sleep
parameter to 10 or 20 to keep it from hogging your I/O bandwidth.
Also, reindex AFTER the vacuum full, not before.
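
In postgresql.conf that looks something like this (8.1-era parameter
names, from memory):

stats_row_level = on                # required for autovacuum on 8.1
autovacuum = on
autovacuum_naptime = 60             # seconds between checks of each database
autovacuum_vacuum_cost_delay = 10   # the sleep parameter; 10-20 is gentle on I/O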

-- 
Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-admin


Re: [ADMIN] Vacuum Problems

2008-11-26 Thread Scott Marlowe
On Wed, Nov 26, 2008 at 12:54 PM, Matthew T. O'Connor [EMAIL PROTECTED] wrote:
 Rafael Domiciano wrote:

 I'm not using autovacuum. Regular vacuum goes ok.
 To see the last 10 lines of verbose output I will need to run vacuum tonight.

 If I run a reindex before the vacuum full, does it increase the speed of
 the vacuum? I found something about it while googling.

 It might help a bit, but by the end of VACUUM FULL you would need to run
 reindex again, as VACUUM FULL tends to cause a lot of index bloat.  It is
 normal for tables to have some slack space, so if you do a regular vacuum
 every day (or let autovacuum do it) it's normal for the table to be a bit
 bigger than after a VACUUM FULL, but they should reach steady state and
 stop growing.

But there are certain use cases, pathological in nature, that you can't
handle with regular vacuum.  It's only when you've proven that that's
your case, and you can't program around it, that you should start using
vacuum full.  Vacuum full is so expensive, in terms of the time the
system is largely unusable, combined with the need to run reindex or to
replace the whole thing with cluster, that if regular or autovacuum can
handle the load, then that's what you do.

I've only seen cases where things like large imports were using a
shared table where it would get bloated insanely if three or four
imports were running at the same time with the occasional update with
no where clause.  Since you can't truncate the table, because it's
shared with other imports, you have to vacuum it, but if you bloat it
by 10x or 100x normal size in 30 seconds, no amount of regular
vacuuming will help.

So, users need to understand why they're always asked if they're
running autovacuum or not.  It's like asking someone with a
nonfunctioning dryer if they've cleaned the lint trap.  It's just
something we assume someone should try first unless there's a good
reason not to.  Because it does work so well most of the time.  I run
autovacuum.  I also email myself the output from vacuum verbose every
week, to look through and see how the tables are looking.  A quick
look near the end tells you if you're mostly ok, and quick
investigation can find bloated tables pretty fast.

So, for the OP: have you tried autovacuum, and why aren't you using
it?  Most of the time when people aren't running it, it's for erroneous
reasons.

Also, look into updating to 8.3 or above.  With its HOT update
mechanism, and autovacuum enabled by default it handles these
situations quite easily.

-- 
Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-admin


Re: [ADMIN] Separate Sessions?? (View data - Query tool)

2008-11-25 Thread Scott Marlowe
2008/11/25 Csaba Együd [EMAIL PROTECTED]:
 Hi,
 I'm administering a Pg8.3 database with PgAdmin3.

 The users need views pre-filtered on a session-based variable which is
 stored in a temporary table. Immediately after logging into the database the
 client software creates a temporary table and stores an id in that. The users
 query views when they want to see the contents of a filtered table, and
 modify the tables directly - the tables have BEFORE UPDATE/DELETE/INSERT
 triggers to check the existence of the temp table and to force filter field
 values to be defaulted from the temporary table field. I hope it is clear...
 :)

 When I log in with PgAdmin I have to create the temp table manually. That's
 ok, and if I use the query tool I can query the views after creating the
 temp table. But if I want to see the contents of a view in table view (View
 Data-View all rows), the views are empty, indicating the lack of the
 temporary table. When I view the table directly (not the view) and try to
 edit a field, the trigger refuses it because of the missing temp table.

I'm guessing that you're creating tables like:

create temporary table blah blah blah

and can't see them from other connections?  That's normal.  If you
want to be able to see tables from multiple sessions you need to
create regular tables.
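
A quick illustration (made-up names; pgAdmin's data grid uses its own
connection, which is why the view looks empty there):

-- session 1:
CREATE TEMPORARY TABLE session_filter (filter_id integer);

-- session 2 sees nothing:
SELECT * FROM session_filter;
ERROR:  relation "session_filter" does not exist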

-- 
Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-admin


Re: [ADMIN] Best option for expanding beyond 1 pg server in this situation

2008-11-25 Thread Scott Marlowe
On Tue, Nov 25, 2008 at 8:48 PM, Jamie Tufnell [EMAIL PROTECTED] wrote:
 Hi,

 We have been wanting to HA our pgsql server for some time now.  We're
 actually reaching the limits of our current server now, so there is a
 pressing need to do something to ease the load too.

 There are several individual solutions to both of these problems.  I
 was hoping I could get some advice from someone more experienced
 though, about what the simplest most effective way of moving forward
 would be?  Hopefully I can squash both HA and LB with the same
 solution.

 So far I've narrowed it down to

 Slony-I master/slave replication with failover

We use this where I work and it works quite well.  We have some custom
scripts that detect failure of the master db and initiate failover to
the slave if necessary.  We do this at the application level.

 pgpool-II with heartbeat for failover.  The pgpool-II configuration
 matrix (http://pgpool.projects.postgresql.org/) isn't terribly clear
 (there's no legend) but it leads me to believe failover and load
 balancing are mutually exclusive options.  Is that so?

Seems to be.  I haven't played with pgpool in a long time so don't
take my word as authoritative.

-- 
Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-admin


Re: [ADMIN] move from mysql to postgresql

2008-11-19 Thread Scott Marlowe
On Mon, Nov 17, 2008 at 3:35 AM, Gerd König [EMAIL PROTECTED] wrote:
 Hello,

 we're going to switch from mysql (ver. 5.0.26) to PostgreSQL (OS:
 opensuse 10.3).
 The mysql database includes several instances, and we only want to
 migrate the bugzilla instance to Postgres (yes, it's the bugzilla
 backend...).
 What are the preferred steps to perform this migration ? (cost free
 solutions are highly appreciated ;-) ).

The general method is to dump the schema and data separately, then
hand-mangle the schema as necessary to make it work in pgsql, and then
import the data, with whatever mangling is needed there to fix the
inevitably messed up data, like dates of 0000-00-00 which make no
sense whatsoever.

However, since bugzilla supports pgsql directly, I'd imagine you can
use the bugzilla population scripts to create your schema in pgsql and
then you only have to massage the data dump from mysql-bugzilla to
make it work.

If you are familiar with sed, it's awfully useful for fixing up broken
data.  Or you can toss together a script in one of the P or R
languages (Perl, PHP, Python, Ruby et al.) to fix it up.  Usually
it's pretty easy, but if you get stumped on some part, post back with
questions and somebody should jump on it.
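
For instance, MySQL's zero dates usually have to become NULLs before
Postgres will swallow them; a made-up one-liner along these lines (test
it before trusting it):

sed -e "s/'0000-00-00'/NULL/g" bugzilla.dump > bugzilla_fixed.dump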

-- 
Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-admin


Re: [ADMIN] Problems With pg_dumpall and OIDs

2008-11-19 Thread Scott Marlowe
On Wed, Nov 19, 2008 at 4:05 PM, Tim Gustafson [EMAIL PROTECTED] wrote:
 A workaround: temporarily turn on the default_with_oids parameter
 in your configuration file (the default is off).

 I tried that, but still got warning messages about no OID columns when I 
 imported the data into 8.2.  :(  Unless maybe I'm doing something wrong.  I'm 
 getting ready to try again shortly.

Did you reload / restart postgresql after making the change?  When you
psql into the server, what does

show default_with_oids;

say?

-- 
Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-admin


Re: [ADMIN] max_files_per_process limit

2008-11-11 Thread Scott Marlowe
On Tue, Nov 11, 2008 at 5:10 AM, Dilek Küçük [EMAIL PROTECTED] wrote:

 On Mon, Nov 10, 2008 at 4:51 PM, Achilleas Mantzios
 [EMAIL PROTECTED] wrote:

On Monday 10 November 2008 16:18:37, Dilek Küçük wrote:
  Hi,
 
  We have a database of about 62000 tables (about 2000 tablespaces) with
  an
  index on each table. Postgresql version is 8.1.
 

 So you have about 62000 distinct schemata in your db?
 Imagine that the average enterprise has about 200 tables max,
 and an average-sized country has about 300 such companies,
 including the public sector; with 62000 tables you could blindly model
 the whole activity of an entire country.

 Is this some kind of replicated data?
 Whats the story?

 Actually we had 31 distinct tables but this amounted to tens of billions of
 records (streaming data from 2000 sites) per table a year, so we
 horizontally partitioned each table into 2000 tables. This allowed us to
 discard one of the indexes that we had created and freed us from periodic
 CLUSTER operations, which turned out to be infeasible for a system with
 tight querying constraints in terms of time.

Any chance of combining less used tables back together to reduce the
number of them?  I'd also look at using more schemas and fewer
tablespaces.  Just a thought.

-- 
Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-admin


Re: [ADMIN] Ideal way to upgrade to postgres 8.3 with less downtime

2008-11-11 Thread Scott Marlowe
On Thu, Nov 6, 2008 at 11:04 PM, Swathi S [EMAIL PROTECTED] wrote:
 Hi,

 I have a database of size approx 400G. It's a 24/7 database. Only few tables
 have regular reads and writes. The other tables just store information.

 I would like to upgrade to 8.3 and would like to know the best way to
 upgrade with less downtime.

Assuming you're running 8.0 or higher now, slony should provide you
with a one-stop upgrade path with the minimum amount of downtime.  We
use it for our database, which is smaller (~20 Gigs) but quite hard
working, with access around the clock, and we switched from 8.1 to 8.3
with about 1 minute of downtime.

-- 
Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-admin


Re: [ADMIN] installing without shell access

2008-11-10 Thread Scott Marlowe
On Mon, Nov 10, 2008 at 8:50 AM, Dana Holland
[EMAIL PROTECTED] wrote:
 Is it at all possible to get pgsql installed on a system without access to a
 shell?  A friend is wanting me to install a particular bulletin board system
 on her website; however, her hosting plan only provides the MySQL database -
 the bulletin board requires pgsql.  And her hosting plan has no ssh access.

 So, is it possible to install the database with only ftp access to the
 account?

Most web languages, PHP for example, allow you to execute arbitrary
commands (using backticks in PHP), so theoretically she could probably do it.

-- 
Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-admin


Re: [ADMIN] installing without shell access

2008-11-10 Thread Scott Marlowe
You do NOT have to have root privileges.  I've built postgresql more
than once without them and made it work.

What you do need is to install it to your local home directory where
the apache user has privileges and set LD_LIBRARY_PATH (or whatever
it's called, it has been a while) and you're golden.

On Mon, Nov 10, 2008 at 11:49 AM, Aras Angelo [EMAIL PROTECTED] wrote:
 gmake install requires root privileges.
 It won't install as a regular httpd user (apache, nobody etc.) from a php-shell
 script or php system command functions.


 On Mon, Nov 10, 2008 at 10:08 AM, Thomas Pundt [EMAIL PROTECTED] wrote:

 Scott Marlowe schrieb:

 On Mon, Nov 10, 2008 at 8:50 AM, Dana Holland
 [EMAIL PROTECTED] wrote:

 Is it at all possible to get pgsql installed on a system without access
 to a
 shell?  A friend is wanting me to install a particular bulletin board
 system
 on her website; however, her hosting plan only provides the MySQL
 database -
 the bulletin board requires pgsql.  And her hosting plan has no ssh
 access.

 So, is it possible to install the database with only ftp access to the
 account?

 most web languanges ala php allow you to execute arbitrary commands
 (using back ticks in php) so theoretically she could probably do it.


 That reminds me of having seen something like a shell implemented in PHP:
 google for php-shell. I've never used such a beast, though.

 Ciao,
 Thomas

 --
 Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org)
 To make changes to your subscription:
 http://www.postgresql.org/mailpref/pgsql-admin





-- 
When fascism comes to America, it will be draped in a flag and
carrying a cross - Sinclair Lewis

-- 
Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-admin


Re: [ADMIN] rebellious postgres process

2008-11-04 Thread Scott Marlowe
On Tue, Nov 4, 2008 at 8:48 AM, Laszlo Nagy [EMAIL PROTECTED] wrote:
 Tom Lane wrote:

 Huh, that's weird.  We've fixed some bugs in the past that led the stats
 collector to consume excessive CPU --- but that was all pre-8.3.


 The server was rebooting intermittently, so we replaced the RAM (we got a
 kernel page fault). But that was a week ago. The server is now stable. But is
 it possible that somehow the file system became inconsistent, and that is
 causing an infinite loop in the stats collector? Just guessing.

Yes, you really can't trust any data that was written to the drives
while the bad memory was in place.

-- 
Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-admin


Re: [ADMIN] rebellious postgres process

2008-11-04 Thread Scott Marlowe
On Tue, Nov 4, 2008 at 11:46 AM, Tom Lane [EMAIL PROTECTED] wrote:
 Scott Marlowe [EMAIL PROTECTED] writes:
 On Tue, Nov 4, 2008 at 8:48 AM, Laszlo Nagy [EMAIL PROTECTED] wrote:
 The server was rebooting intermittently, so we replaced the RAM (we got a
 kernel page fault). But it was a week ago. The server is now stable. But is
 it possible that somehow the file system became inconsistent, and that is
 causing an infinite loop in the stats collector? Just guessing.

 Yes, you really can't trust any data that was written to the drives
 while the bad memory was in place.

 Still, it's quite unclear how bad data read from the stats file could
 have led to an infinite loop.  The stats file format is pretty flat
 and AFAICS the worst effect of undetected corruption would be to have
 wrong count values for some tables/databases.

True.  Is it possible some other bit of the data in the system was
corrupted and freaking out the stats collector?

-- 
Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-admin


Re: [ADMIN] Crash with pg_clog file not found

2008-11-03 Thread Scott Marlowe
On Mon, Nov 3, 2008 at 8:49 AM, Matthieu Roger [EMAIL PROTECTED] wrote:

 8.3.3 was also producing the same error, prior version (8.3.1) did not
 seem to exhibit it, though we've opened a new universe in the web game
 which increased the number of accounts and players in september so
 maybe this triggers the problem.

I know you probably don't want to hear this right now, but PostgreSQL
can handle a much higher load under some flavor of unix than it can
under windows.  Luckily, it's pretty easy to set up a machine running
CentOS 5, Ubuntu 8.x or some other flavor of linux.  Due to basic
architectural differences, that gap isn't likely to close soon.

-- 
Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-admin


Re: [ADMIN] pg_dump's table is empty

2008-11-02 Thread Scott Marlowe
On Sun, Nov 2, 2008 at 4:29 PM, Daniel Punton [EMAIL PROTECTED] wrote:
 I am dumping postgres 7 tables under the postgres account:  mydb# pg_dump -O
 -a -t mytable > mytable.out  and am getting either
 empty dumps (actual db tables are populated) or no dump at all.
 I have file write permissions and this process has worked in the past.
 Any suggestions what might be wrong?

Can you post an empty dump, or describe it more thoroughly?

-- 
Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-admin


Re: [ADMIN] autovacuum questions

2008-10-31 Thread Scott Marlowe
On Fri, Oct 31, 2008 at 10:02 AM, Alvaro Herrera
[EMAIL PROTECTED] wrote:
 Rafael Domiciano escribió:

 The vacuum is not full; to run a full vacuum you need to set up a cron job,
 better run at night.

 Doing VACUUM FULL on crontab is rarely a good idea.  If you find
 yourself in a situation where you need VACUUM FULL, then you've not
 tuned regular vacuum appropriately.

There are some use cases where vacuum full is appropriate.  But
they're rare.  And they're usually better off being added to whatever
script is doing the thing that causes the database to need vacuum
full.  Certain batch processing or data loading processes need vacuum
full and/or reindex when finishing.

But the common thought process on vacuum full for most people is If
vacuum is good, vacuum full must be even better! which is just wrong.

-- 
Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-admin


Re: [ADMIN] SSL and md5 password?

2008-10-29 Thread Scott Marlowe
On Wed, Oct 29, 2008 at 11:07 AM, Peter Koczan [EMAIL PROTECTED] wrote:
 Hi all,

 I'm looking to add an md5-based user to a postgres server, and I can't
 seem to get psql to connect using SSL. Normally I use Kerberos to
 connect, and that works flawlessly with SSL. I'd much prefer to use
 SSL connections, so I'm wondering if there's something I'm doing
 wrong.

 Here's what happens with a Kerberos connection...works just fine,
 connects using SSL:

 $ /s/postgresql/bin/psql -h dumbo.cs.wisc.edu -p 5432 koczan_test
 Welcome to psql 8.3.4, the PostgreSQL interactive terminal.

 Type:  \copyright for distribution terms
   \h for help with SQL commands
   \? for help with psql commands
   \g or terminate with semicolon to execute query
   \q to quit

 SSL connection (cipher: DHE-RSA-AES256-SHA, bits: 256)

 koczan_test=

 And an md5 connection...

 $ /s/postgresql/bin/psql -h dumbo.cs.wisc.edu -p 5432 -U test_md5 koczan_test
 Password for user test_md5:
 psql: FATAL:  no pg_hba.conf entry for host 128.105.162.36, user
 test_md5, database koczan_test, SSL off

According to this, you're trying to connect with SSL off, right?  I
don't think you've got any matching lines for that in your
pg_hba.conf.  But I could be reading that wrong.
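
If so, you'd want a line roughly like this (client address taken from
your error message; hostssl matches only SSL connections, plain host
would match both):

# TYPE    DATABASE     USER      CIDR-ADDRESS         METHOD
hostssl   koczan_test  test_md5  128.105.162.36/32    md5

followed by a pg_ctl reload.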

-- 
Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-admin


Re: [ADMIN] pgadmin not processing scheduled jobs

2008-10-24 Thread Scott Marlowe
On Fri, Oct 24, 2008 at 3:00 PM, Robert Young [EMAIL PROTECTED] wrote:
 I've solved the issue.  pgagent.pga_job.jobhostagent must be blank.  Thanks
 to this link http://forums.enterprisedb.com/posts/list/1419.page

 Anyway, the docs say ... optionally the hostname of a specific machine
 running pgAgent, if this job should only run on that specific server. If
 left empty, any server may run the job. This is not normally an issue for
 SQL-only jobs, however any jobs with batch/shell steps may need to be
 targetted to a specific server. ..., so why didn't it work when I specified
 the host of 127.0.0.1 or localhost?

I'm just guessing, but it could be that your machine is set up to use
local unix sockets only and not allow tcp/ip connections.

-- 
Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-admin


Re: [ADMIN] PITR question with base backup

2008-10-21 Thread Scott Marlowe
On Tue, Oct 21, 2008 at 2:29 PM, Scott Whitney [EMAIL PROTECTED] wrote:
 It is, is it? I was completely under the impression that it was not. Don't
 ask me where I got that impression. :)

 No problem whatsoever, in that case!

 Thanks for clearing up my inability to comprehend documentation...

This is what I like so much about PostgreSQL.  Most of the surprises
are of the "oh wow! That's cool!" kind, not the "oh shit!" kind.  :)

-- 
Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-admin


Re: [ADMIN] pg_dumpall size

2008-10-21 Thread Scott Marlowe
On Tue, Oct 21, 2008 at 11:56 AM, Marc Fromm [EMAIL PROTECTED] wrote:
 I have the same postgresql databases on two different servers.

 Both servers have the same version of postgresql, 8.1.

 The following backup command creates a file twice as big on one server,
 compared to the other server.

 pg_dumpall -c -U postgres | gzip > alldb.gz

Different default compression levels for gzip?  Try gzip -6 or
something on both of them.

Otherwise, unzip them and compare sizes to see if they really are the
same unzipped.
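
i.e. something like (file names made up):

pg_dumpall -c -U postgres | gzip -6 > alldb.gz    # pin the same level on both servers
gunzip -c alldb.gz | wc -c                        # compare the uncompressed byte counts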

-- 
Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-admin


Re: [ADMIN] Move tables to tablespace

2008-10-20 Thread Scott Marlowe
On Mon, Oct 20, 2008 at 8:10 AM, Campbell, Lance [EMAIL PROTECTED] wrote:
 I would like to have an SQL statement that would move all of the tables and
 indexes found within a particular schema to a particular tablespace.  Is
 there a way I can do this?

I think you'll have to write a script (either external or in plpgsql)
to do this.

-- 
Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-admin


Re: [ADMIN] PostgresSQL DBA books

2008-10-17 Thread Scott Marlowe
On Fri, Oct 17, 2008 at 12:24 PM, Isabella Ghiurea
[EMAIL PROTECTED] wrote:

 I'm looking for advice in purchasing a few good DBA admin books for
 PostgreSQL; any tips?

Well, start with the online docs, they're up to date and have a lot of
good information in them.

-- 
Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-admin


Re: [ADMIN] log activity questions

2008-10-14 Thread Scott Marlowe
On Thu, Oct 9, 2008 at 12:56 PM, Marc Fromm [EMAIL PROTECTED] wrote:
 I started logging on our server. There are many entries like this:

 transaction ID wrap limit is 1073768178, limited by database postgres

 transaction ID wrap limit is 1073771864, limited by database sms

 Each database has several of the above entries.

That's completely normal.  It's an informational log entry, not a warning or an error.

 Also there are these fatal entries:

 FATAL:  database template0 is not currently accepting connections

 FATAL:  database template0 is not currently accepting connections

 Does the template need to accept connections? If I make a new database based
 on template0 does that mean that database cannot accept connections?

Template0 is your "man, I really screwed up template1" get-out-of-jail-
free database.  It is normally set to not allow connections (look at
select * from pg_database; for the field that does or doesn't allow
connections).  If you ever did terrible things to template1, you could
set template0 to datallowconn = t and then drop and recreate template1
using template0 as the template.  Then VACUUM FREEZE template0 and set
it to not allow connections again to get it back to normal.
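
A rough sketch of that recipe, as a superuser (dangerous and untested
here; only do it if template1 really is hosed):

UPDATE pg_database SET datallowconn = true WHERE datname = 'template0';
UPDATE pg_database SET datistemplate = false WHERE datname = 'template1';
DROP DATABASE template1;
CREATE DATABASE template1 WITH TEMPLATE template0;
UPDATE pg_database SET datistemplate = true WHERE datname = 'template1';
-- now connect to template0 and clean it up:
VACUUM FREEZE;
UPDATE pg_database SET datallowconn = false WHERE datname = 'template0';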

-- 
Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-admin


Re: [ADMIN] checkpoint_timeout

2008-10-08 Thread Scott Marlowe
On Wed, Oct 8, 2008 at 12:08 PM, Kevin Grittner
[EMAIL PROTECTED] wrote:

 If the problem is checkpoints (which seems likely but far from a sure
 thing based on the information provided), aggressive background writer
 setting might be your best bet under 8.2.X.  To solve similar problems
 we had to go to the following, although many on these lists feel that
 settings this aggressive are rarely needed, so use at your own risk.

 #bgwriter_delay = 200ms
 bgwriter_lru_percent = 20.0
 bgwriter_lru_maxpages = 200
 bgwriter_all_percent = 10.0
 bgwriter_all_maxpages = 600

I've tuned an 8.2 server with settings similar to this, and it made a
world of difference on smoothing out performance.  Note that bgwriter
tends to use cpu and memory bandwidth up, so avoid going crazy on it.

 Even better would be to go to the latest revision of the 8.3 release,
 which at this writing is 8.3.4.  In that release PostgreSQL spreads
 out the work of a checkpoint to minimize this problem.

Seconded.  A lot of hard work went into making the bgwriter much
easier to adjust, and much less likely to even need adjusting in 8.3.

-- 
Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-admin


Re: [ADMIN] replication with table add/removes..

2008-10-08 Thread Scott Marlowe
On Wed, Oct 8, 2008 at 9:10 AM, Martin Badie [EMAIL PROTECTED] wrote:
 Hi,

 I have a system where constant table adds/removes are present. I want to
 set up replication between master and slaves but am not sure which one is the
 best solution for that kind of situation. Since I am new to replication
 stuff on postgresql I am truly lost, but I know that Slony is not an answer
 for replication where table adds/removes are present.

I think that pgpool might work for this.  There are probably some
other front end type replication systems that would work too.

-- 
Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-admin


Re: [ADMIN] log results

2008-10-08 Thread Scott Marlowe
On Wed, Oct 8, 2008 at 5:04 PM, Marc Fromm [EMAIL PROTECTED] wrote:
 I just started logging postgresql. In the log are these entries:

478 LOG:  transaction ID wrap limit is 1110972072, limited by database
 cswe2

479 LOG:  transaction ID wrap limit is 1110972072, limited by database
 cswe2

Don't worry about the xaction wraparound.  That's just a notice.


480 NOTICE:  number of page slots needed (27072) exceeds max_fsm_pages
 (20000)

481 HINT:  Consider increasing the configuration parameter
 max_fsm_pages to a value over 27072.

 Can someone explain what it is and what I need to do to fix it?

You need to consider increasing the configuration parameter
max_fsm_pages to a value over 27072.

The configuration parameters are found in postgresql.conf, which can
be in various places depending on your OS.  Then restart postgresql.
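
e.g. a line like this (pick a value comfortably above what the hint asks
for, to leave some headroom):

max_fsm_pages = 50000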

-- 
Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-admin


Re: [ADMIN] Shared_buffers hint

2008-10-06 Thread Scott Marlowe
On Mon, Oct 6, 2008 at 5:52 AM, Rafael Domiciano
[EMAIL PROTECTED] wrote:
 HI people,
 I'm tuning a server that was abandoned a long time ago, and this server is a
 little slow.

What version pgsql is it running?  If it's 7.4 or before, you should
look at upgrading it.  If you can't, then a shared buffer setting in
the 1000 to 10000 range is generally reasonable, but large
shared_buffers settings (i.e. over 10000) are counterproductive for 7.4
and before.

 Now, in my postgresql.conf I have:
 shared_buffers = 100Mb

So it's a pretty modern version, since old ones can't read 100Mb as a setting.

 max_fsm_relations = 15000
 max_fsm_pages = 160
 max_connections = 300
 And I changin' to:
 shared_buffers = 1000Mb
(The server has 2 Gb of memory)

50% is kind of big.  Generally you either want it small enough that
the OS can do the majority of the caching (it's usually better at
caching large amounts of data) or large enough that the kernel cache
doesn't come into play much.  50% means that everything is buffered
exactly twice.

 max_fsm_relations = 15000
(Vacuum reported 608 relations)
 max_fsm_pages = 160
(Vacuum reported 500800 pages)
 max_connections = 300
(I did a ps axf | grep postgres | wc -l and the shell gives me the
 number 120, and not all the people are online right now)
 My question is about shared_buffers; in postgresql.conf I have: min 128kB or
 max_connections*16kB  # (change requires restart)
 So I did:
 300 * (16kB / 1024) = 4,69?
 What does this result mean?

That's just the minimum the server needs to operate.  Not operate
well, just operate.

-- 
Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-admin


Re: [ADMIN] Shared_buffers hint

2008-10-06 Thread Scott Marlowe
On Mon, Oct 6, 2008 at 7:56 AM, Rafael Domiciano
[EMAIL PROTECTED] wrote:
 I'm using version 8.3.3 on this machine!
 So, is shared_buffers set to 100Mb ok? Or is 500Mb better?
 This server we call Reporter Server, so it's used to do heavy reports for
 a few users only (I could say 5 users).
 What could be the best config for my server with:
 2 Gb RAM
 300 GB HD
 Postgres 8.3
 Dual Core

100M is probably adequate.  2G is a pretty small database server
memory wise.  I assume by 300G HD you mean a single hard drive.  Since
a single hard drive is going to limit the speed at which you can
access data from it, I'd leave shared_buffers at 100M and let the OS
cache data for you.  Also, look at work_mem.  You might want to set
those few heavy users to have more work_mem than the other users.

alter user heavyuser set work_mem=128000;

note that work_mem is per sort, per user, so it's quite possible to
exhaust main memory if you set it high for everybody and they all do
sorts on large sets at the same time.

-- 
Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-admin


Re: [ADMIN] Postgres database as a client/server architecture

2008-10-06 Thread Scott Marlowe
On Mon, Oct 6, 2008 at 12:23 AM, Mohammed Rashid [EMAIL PROTECTED] wrote:
 Hi All,

 I want to use the Postgres database in a client/server architecture.
 I mean, I want to run the clients on different PCs and want all
 the transactions to get automatically updated on the server.
 It should also take care of network disconnections and update the
 server automatically once the network connection is back.

PostgreSQL already is a client server architecture.  However, it
sounds like what you're actually looking for is some kind of client AS
server method where the clients can operate independently then update
the database at some later date.

PostgreSQL has no facilities to do this on its own.

-- 
Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-admin


Re: [ADMIN] Postgres database as a client/server architecture

2008-10-06 Thread Scott Marlowe
On Mon, Oct 6, 2008 at 9:00 AM, Rich [EMAIL PROTECTED] wrote:
 On Mon, Oct 6, 2008 at 10:08 AM, Scott Marlowe [EMAIL PROTECTED] wrote:
 On Mon, Oct 6, 2008 at 12:23 AM, Mohammed Rashid [EMAIL PROTECTED] wrote:
 Hi All,

 I have use the Postgres database as a client/server architecture.
 I mean I want to run the clients on different PCs and want that all
 the transactions should get automatically updated in the server.
 It should also take care of network disconnections and update the
 server after network connections automatically.

 PostgreSQL already is a client server architecture.  However, it
 sounds like what you're actually looking for is some kind of client AS
 server method where the clients can operate independently then update
 the database at some later date.

 PostgreSQL has no facilities to do this on its own.

 Actually it does have the facilities to do such an architecture.
 It's called psql, PostgreSQL's version of SQL.  Just write SQL
 statements to update, delete or add records accordingly. You can do
 this using a web interface, or another client db like Access or Paradox.
 There are tons of ways to do it; you can use an ODBC interface.  So
 Postgres does come with everything you need.

Not if what the OP wants is the Lotus Notes type functionality, where
things are updated on the client, then at some later date updated on
the main server, and all data inconsistencies are automagically taken
care of.

OTOH, if all the OP wanted was to just have plain old client - server
architecture, yeah, that's built right in to psql / libpq...

-- 
Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-admin


Re: [ADMIN] Error while trying to back up database: out of memroy

2008-09-29 Thread Scott Marlowe
On Sun, Sep 28, 2008 at 2:18 AM, Peter Kovacs
[EMAIL PROTECTED] wrote:
 On Mon, Sep 22, 2008 at 4:43 PM, Tom Lane [EMAIL PROTECTED] wrote:
 Vladimir Rusinov [EMAIL PROTECTED] writes:
 But now I'm getting following error:
 pg_dump: WARNING:  terminating connection because of crash of another server
 process

 As a rule of thumb, you should disable OOM kill on any server system.

 This document describes a few solutions potentially better than
 outright disabling:
 http://www.redhat.com/archives/taroon-list/2007-August/msg6.html .
 (I don't know whether those solutions actually work or not, but may be
 worth trying by the look of it.)

While there are better solutions for other types of servers, like web
servers and what not, for PostgreSQL servers, overcommit isn't usually
needed, and OOM killer / overcommit can both be disabled.

-- 
Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-admin


Re: [ADMIN] Do we need vacuuming when tables are regularly dropped?

2008-09-29 Thread Scott Marlowe
On Mon, Sep 29, 2008 at 11:12 AM, Steve Crawford
[EMAIL PROTECTED] wrote:

 What it sounds like to me is that you're not vacuuming the system
 catalogs, which are getting bloated with dead rows about all those
 dropped tables.


 Wow, great!

 It is not immediately clear from the documentation, but the VACUUM
 command deals with the system catalogs as well, correct?



 To expand on Tom's answer, rows in system tables are created not only for
 tables but for each column in the table, rules, indexes, etc. You can end
 up with a lot more row creation than you suspect. And temporary tables bloat
 the system tables just like regular tables. We discovered that cron scripts
 using temporary tables can cause very rapid system-table bloat.

Also, there was a time when you couldn't do vacuum full on system
tables due to locking issues, and had to take the db down to
single-user mode to do so.

Tom, is that still the case?
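
In the meantime, plain VACUUM on the catalogs is safe.  A minimal
sketch (the database name is made up, and these are just the catalogs
that tend to bloat fastest with temp-table churn):

psql -d appdb -c "VACUUM ANALYZE pg_catalog.pg_class"
psql -d appdb -c "VACUUM ANALYZE pg_catalog.pg_attribute"
psql -d appdb -c "VACUUM ANALYZE pg_catalog.pg_depend"
# or hit everything, system catalogs included:
vacuumdb --all --analyze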

-- 
Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-admin


Re: [ADMIN] postgres at reboot

2008-09-26 Thread Scott Marlowe
On Fri, Sep 26, 2008 at 2:38 PM, Tena Sakai [EMAIL PROTECTED] wrote:
 Hi Scott,

 When I issue: /sbin/chkconfig --list | grep postgres
 it comes back with:

  postgresql_ORG  0:off   1:off   2:off   3:off   4:off   5:off   6:off
  postgresql  0:off   1:off   2:on3:on4:on5:on6:off

 It seemed a bit strange to me that it says 'off' at run level 6.

Run level 6 is reboot, so that's normal.

 I went into /etc/rc.d and issued:
  sudo find . -name \*postgresql\* -ls | grep S98postgresql
 and it came back with:

  156181860 lrwxrwxrwx   1 root root   20 Aug 21 17:00
 ./rc4.d/S98postgresql -> ../init.d/postgresql
  156182940 lrwxrwxrwx   1 root root   20 Aug 21 17:00
 ./rc3.d/S98postgresql -> ../init.d/postgresql
  156183510 lrwxrwxrwx   1 root root   20 Aug 21 17:00
 ./rc2.d/S98postgresql -> ../init.d/postgresql
  156180240 lrwxrwxrwx   1 root root   20 Aug 21 17:00
 ./rc5.d/S98postgresql -> ../init.d/postgresql

 Next, I went into /etc/rc.d/rc6.d and typed:
  ls -l
 and it gave me this:
  .   .  ..  .  .   .   . .  ..
  .   .  ..  .  .   .   . .  ..
lrwxrwxrwx   1 root root   20 Aug 21 17:00 S98postgresq ->
 ../init.d/postgresql

 There is an 'l' missing from the name!  I thought for a moment

It shouldn't be there; sounds like someone added it by hand.
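
If you want to clean it up, removing the stray link by hand should be
safe (path as shown above):

sudo rm /etc/rc.d/rc6.d/S98postgresq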

 I found the culprit, but then I issued the command below:
  /sbin/chkconfig --list | grep '6:on'
 and it returned nothing.

 I am a bit confused.  As I understand it, run level 6 means, in
 the Red Hat context, shutdown and reboot.  But it seems in my case
 nothing is turned on for level 6.  So that missing 'l'
 is really of no significance?

Right, nothing should be started for those run levels.

-- 
Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-admin


Re: [ADMIN] iso-8859-1 and utf-8

2008-09-25 Thread Scott Marlowe
On Thu, Sep 25, 2008 at 1:59 AM, Claus Guttesen [EMAIL PROTECTED] wrote:
 Hi.

 I found out by accident that my db-dump, at least in ver. 8.3.3,
 automagically converts from iso-8859-1 to utf-8. Our db has been
 around since January 2000 and iso-8859-1 was chosen back then as
 encoding. When I occasionally imported the nightly dump to a test-db
 it would complain very early during the import and abort if the
 encoding was incorrect. I think this was also the case with 8.3.1
 although I'm not certain.

I'm guessing that the client encoding is set to utf-8 instead of
iso-8859-1 on the client end.  pg automatically converts to the client
encoding of the user connecting / environment.

I know that at least 8.2 supports pg_dump's -E switch to set the dump encoding.

pg_dump --help
MUCH SNIPPED:
-E, --encoding=ENCODING dump the data in encoding ENCODING
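
If you want to pin the encoding down yourself, a quick sketch, with a
made-up database name:

# dump in an explicit encoding rather than the client default
pg_dump -E ISO-8859-1 mydb > mydb.sql
# or set the client encoding through the libpq environment
PGCLIENTENCODING=ISO-8859-1 pg_dump mydb > mydb.sql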

-- 
Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-admin


Re: [ADMIN] missing chunk number 0 for toast value

2008-09-25 Thread Scott Marlowe
On Thu, Sep 25, 2008 at 10:09 AM, BJ Taylor
[EMAIL PROTECTED] wrote:
 PANIC:  right sibling's left-link doesn't match: block 175337 links to
 243096 instead of expected 29675 in index dbmail_headervalue_3
 STATEMENT:  INSERT INTO dbmail_headervalue (headername_id, physmessage_id,
 headervalue) VALUES (4,12335778,'from [76.13.13.25] by
 n6.bullet.mail.ac4.yahoo.com with NNFMP; 25 Sep 2008 04:01:36 -')
 LOG:  server process (PID 13888) was terminated by signal 6: Aborted
 LOG:  terminating any other active server processes
 WARNING:  terminating connection because of crash of another server process

Tom, does postgres generate abort signal?  Or would this be an external signal?

-- 
Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-admin


Re: [ADMIN] postgres at reboot

2008-09-25 Thread Scott Marlowe
On Thu, Sep 25, 2008 at 1:42 PM, Tena Sakai [EMAIL PROTECTED] wrote:
 Hi Everybody,

 About 1.5 months ago, my machine (which runs Red Hat Linux
 2.6.9-78.0.1.ELsmp on Dell hardware with postgres 8.3.3)
 had a terrible crash.  I am mostly recovered, but there
 is at least one more thing that's not right.

 Namely, when the machine gets rebooted, postgres doesn't
 start automatically.  Before the crash, there was no such
 problem.

In RH, you use chkconfig to see what's set to start:

chkconfig --list

will show you all the services and what run levels they come up in.

chkconfig servicename on|off -- will turn a service on or off at boot.

service servicename start -- will start a service.
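
Concretely, assuming the service is named postgresql as in the stock
init script:

chkconfig postgresql on      # come up at boot in run levels 2-5
chkconfig --list postgresql  # verify
service postgresql start     # start it right now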

-- 
Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-admin


Re: [ADMIN] Hex representation

2008-09-24 Thread Scott Marlowe
I used this very simple little php script to make this

filename: mk55:
#!/usr/bin/php -q
<?php
// emit 262144 (256K) bytes of 0x55, i.e. the ASCII letter 'U'
for ($i = 0; $i < 262144; $i++) {
    print chr(85);
}
?>

and ran it:

./mk55 > 55

ls -l 55
-rw-r--r-- 1 smarlowe smarlowe 262144 2008-09-24 13:41 55

i.e. it's 256k.

And it's attached.

On Wed, Sep 24, 2008 at 1:20 PM, Carol Walter [EMAIL PROTECTED] wrote:
 Hello,

 Does anyone know what the format of hex characters for postgres is?  I'm
 trying to create files that contain a 0x55.  It looks to me like it should
 require a delimiter of some sort between the characters.  I don't know how
 postgres would know that the string was a hex representation and not just a
 character zero, followed by a character x, followed by a character 5,
 followed by a character 5.

 Carol

 --
 Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org)
 To make changes to your subscription:
 http://www.postgresql.org/mailpref/pgsql-admin



mk55
Description: Binary data

-- 
Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-admin


Re: [ADMIN] Missing pg_clog files

2008-09-24 Thread Scott Marlowe
No, the file needs to be nothing but hex value 55 throughout.  The
attachment I sent earlier is just that:

[EMAIL PROTECTED]:~$ hexdump 55
0000000 5555 5555 5555 5555 5555 5555 5555 5555
*
0040000

i.e. all 55s, all the time: not the ASCII digits 5 and 5, but the hex
byte value.  Viewed as ASCII, the file is the letter U over and over:

[EMAIL PROTECTED]:~$ head -c 100 55
UUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUU
UUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUU

and so on.
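
If you'd rather not bother with PHP, a shell one-liner sketch that
should produce the same file (assumes GNU dd and tr; \125 is octal
for 0x55):

dd if=/dev/zero bs=262144 count=1 | tr '\0' '\125' > 55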

On Wed, Sep 24, 2008 at 12:09 PM, Carol Walter [EMAIL PROTECTED] wrote:
 To use the hex value 0x55, do I need to enclose it in single quotes?

 Carol

 On Sep 24, 2008, at 11:44 AM, Tom Lane wrote:

 Carol Walter [EMAIL PROTECTED] writes:

 Are the files that contain the hex characters supposed to contain a
 single string and no control characters?

 Yes, you want 256K occurrences of the byte value 0x55 and nothing else.

 I'm also wondering if, after I create the dummy files, and pg_dump
 works, I could restore an old pg_dumpall file and then insert any
 data that aren't there from the pg_dumps.

 It'd be a good idea to do as much cross-checking as you can, since
 it's highly probable that the dumped data will be at least partly wrong.

 No, I'm afraid updating to 8.2.latest won't get you out of this.
 It might possibly prevent a recurrence.

regards, tom lane

 --
 Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org)
 To make changes to your subscription:
 http://www.postgresql.org/mailpref/pgsql-admin



-- 
Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-admin


Re: [ADMIN] error

2008-09-23 Thread Scott Marlowe
On Tue, Sep 23, 2008 at 2:48 PM, Carol Walter [EMAIL PROTECTED] wrote:
 Hi, Folks,
 What would cause this error?
 SQL error:
 ERROR:  could not access status of transaction 10274530
 DETAIL:  Could not open file pg_clog/0009: No such file or directory.

Often it's a problem caused by a virus checker.

-- 
Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-admin


Re: [ADMIN] Missing pg_clog files

2008-09-23 Thread Scott Marlowe
On Tue, Sep 23, 2008 at 4:35 PM, Carol Walter [EMAIL PROTECTED] wrote:
 Hi, Tena,
 Yes, you do recall correctly.  It is Solaris 10 and Postgres 8.2.3.
 I'm going to run a pg_dumpall and keep my fingers crossed.  I'm getting
 similar errors when I try to do other things like vacuumdb, so maybe
 pg_dumpall won't run either.

You do know that version has known data-eating bugs, fixed in later
versions, right?

-- 
Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-admin


Re: [ADMIN] Missing pg_clog files

2008-09-23 Thread Scott Marlowe
On Tue, Sep 23, 2008 at 7:59 PM, Walter,  Carol Williams
[EMAIL PROTECTED] wrote:
 Hi, Tena,

 I actually downloaded 8.3.3 last week.  The upgrade has been in the plan.  I
 didn't realize the latest was 8.3.4.  I am concerned though: we have so many
 dependencies that upgrades never go smoothly.

The real priority is getting the latest bug fix release of 8.2
installed.  That only takes a few minutes and is quite easy; just
update the package for pgsql.

Going to 8.3 requires a dump and restore plus testing your app for
compatibility, something you don't have to do going to 8.2.10 or
whatever version is the latest 8.2 release.
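
Roughly, the two paths look like this; a sketch only, since package
names are made up and vary by platform:

# minor update: data files stay put
yum update postgresql postgresql-server
# major upgrade: dump with the OLD binaries first
pg_dumpall > backup.sql
# then install 8.3, initdb a fresh cluster, and reload
psql -f backup.sql postgres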

-- 
Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-admin


Re: [ADMIN] Missing pg_clog files

2008-09-23 Thread Scott Marlowe
On Tue, Sep 23, 2008 at 10:16 PM, Tena Sakai [EMAIL PROTECTED] wrote:
 Hi Carol,

 I detect in you some apprehension that pg_dumpall
 won't run or complete.  Why is that?  Have you already
 done it and it didn't work?  If that's not the case,
 why not run pg_dumpall at a quiet hour and see?

 I think Scott is right about installing the latest
 8.2 on top.  It won't be a time-consuming task.
 Why not give it a whirl?  It would be good to
 find out one way or the other.


 Scott: Are files  through 002F (which are
 not there) absolutely necessary for recovering data?

Most likely not.  If the db won't start up without them, it might be
possible to create new clog files that are nothing but zeroes.  Never
been in this position though...
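
For the record, a sketch of stubbing one out with zeroes; the segment
name here is just an example, so use whichever file the error
complains about, and back everything up first:

# pg_clog segments are 256KB; fill one with zero bytes
dd if=/dev/zero of=$PGDATA/pg_clog/0024 bs=262144 count=1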

-- 
Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-admin

