Re: [ADMIN] Postgres on Windows

2011-01-26 Thread Chris Browne
ammar.fall...@automata4.com (Ammar Fallaha) writes:
 Anyone know of a group to support Postgres on Windows?

 Obviously your group's main concern is Postgres on Linux/UNIX.

 Searching on the internet is not resolving anything SPECIFIC to
 Postgres for Windows.

It's quite possible that most people happen to be running Postgres on
some flavour of Unix.  For a long time, that was the only option, so for
long time users, that has been pretty exclusively true.

But many of the issues are the same, regardless of platform, and so are
entirely appropriate to come here, whether relating to Windows or
otherwise.

If we added pgsql.admin.windows and pgsql.admin.unix lists, this
would likely worsen things because:

 - Issues not relating to platform would fit on pgsql.admin, and so
   users of Windows would need to subscribe to both pgsql.admin and
   pgsql.admin.windows

 - Similarly, Unix folk would be subscribing to pgsql.admin and
   pgsql.admin.unix

 - Sometimes people would get confused, or be uncertain where the
   problem lies, and post Unix or Windows issues on pgsql.admin, making
   this list look much like it is today

Add in extra not-much-worthwhile fodder such as:

 - People cross posting in multiple places, to make sure they're covered

 - People flaming one another because that should have been posted on
   the other list!!!

and you've got a bunch more traffic, without there being any additional
actual information.

I can pretty readily ignore posts about Windows-related issues on
pgsql.admin that don't apply to me; that seems a better thing than
trying to add a deeper layering of lists that is likely to be
counterproductive.
-- 
cbbrowne,@,linuxfinances.info
http://linuxfinances.info/info/multiplexor.html
Tooltips are the proof that icons don't work.
-- Stefaan A. Eeckels



Re: [ADMIN] State of multi-master replication options

2011-01-20 Thread Chris Browne
scott.marl...@gmail.com (Scott Marlowe) writes:
 On Thu, Jan 20, 2011 at 9:53 AM, CS DBA cs_...@consistentstate.com wrote:
 Hi All;

 I suspect I know the answer to this...

 What's the current state of multi-master replication for PostgreSQL? Is
 Bucardo the only true master/master solution out there that might be worthy
 of a production push?  Is Postgres-R a candidate at this point (I suspect
 not)? Are there any other master/master or preferably  multi-master (3+
 masters) solutions available?

 Unless you roll your own with separate sequence ranges and inherited
 tables via slony, yes, Bucardo seems to be it right now.

Sounds right.

There were several attempts to come up with Slony successors doing
multimaster replication; it turns out to be a really hard problem to do
this in a general fashion, and have efficient results.  It probably fits
in with Brewer's CAP Theorem:

  http://en.wikipedia.org/wiki/CAP_theorem

CAP indicates you can choose between Consistency, Availability, and
Partition Tolerance, and you can notably not have all three.

When building would-be general purpose mechanisms for Postgres, it
seems rather undesirable to throw out Consistency; we usually imagine
that being able to require consistency was one of the reasons people
thought it wise to use Postgres in the first place :-).  But retaining
consistency is pretty problematic.

  - Work on one node might be inconsistent with what's happening on
    another node.  Two kinds of examples that would have differing
    kinds of behavior include:
      a) Maintaining foreign key relationships
      b) Managing account balances

  - Evaluating those sorts of consistency so as to *prevent*
    inconsistency is a Mighty Expensive thing to do.  (Slony-II tried
    doing this, and found it punitively expensive to performance, as
    well as biting back at applications.  Postgres-R is on the same
    road, so one may expect certain behaviors there.)

  - Bucardo takes the approach of having application rules where you
    have to pre-code what to do when it discovers inconsistencies when
    trying to apply changes to other nodes.  That's not "general
    purpose" in the sense that you need to write rules specific to your
    system's behavior.
-- 
http://linuxfinances.info/info/slony.html
Rules of  the Evil Overlord #14. The  hero is not entitled  to a last
kiss, a last cigarette, or any other form of last request.
http://www.eviloverlord.com/



Re: [ADMIN] preventing transaction wraparound

2011-01-13 Thread Chris Browne
mbro...@gmail.com (Mike Broers) writes:
 Lately I have been paranoid about the possibility of transaction wrap
 around failure due to a potential orphaned toast table.  I have yet to
 prove that I have such an object in my database.. but I am running
 Postgres 8.3 with auto_vacuum enabled and am doing nightly manual
 vacuums as well and cannot explain the results of this query.  Any
 assistance is greatly appreciated.


 Yesterday I ran:

 production=# select datname, age(datfrozenxid) from pg_database;
    datname   |    age
  ------------+-----------
   template1  | 100260769
   template0  |  35997820
   postgres   | 100319291
   stage      | 100263734
   production | 100319291

 and today after the nightly vacuum ran I got this:

 production=# select datname, age(datfrozenxid) from pg_database;
    datname   |    age
  ------------+-----------
   template1  | 100677381
   template0  |  37594611
   postgres   | 100738854
   stage      | 100680248
   production | 100738770

 Am I just counting down to 2,000,000,000 and the postgresapocalypse?
 Is there a way for me to determine what the actual transaction
 threshold is going to be? I've read the postgresql docs and Greg
 Smith's section in High Performance and have to admit I am having
 difficulty understanding how this number is not retreating after a
 manual database vacuum.

Nothing is likely to be problematic here.

Tuples only get frozen once they're Rather Old.  By default, the
freeze won't happen until the age reaches 150 million.  See the value
of GUC vacuum_freeze_table_age:

  http://www.postgresql.org/docs/9.0/static/runtime-config-client.html#GUC-VACUUM-FREEZE-TABLE-AGE

I'd expect to see the age increase towards 150M before anything more
happens.

I suggest you poke into this at a bit more detailed level, and peek at
the states of the tables in one of those databases via:

  SELECT relname, age(relfrozenxid) FROM pg_class WHERE relkind = 'r'
  ORDER BY 2;

This will tell you which tables have what going on with their freezing.

You could explicitly run VACUUM FREEZE against one or another of the
databases, which would cause all the affected tables' data to get
frozen, and if you did that against all the tables in (say) the
postgres database, you might anticipate seeing the age fall to near 0.
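
For instance - just a sketch, picking on the "postgres" database:

  psql -d postgres -c "VACUUM FREEZE;"
  psql -d postgres -c "select datname, age(datfrozenxid) from pg_database;"

After the freeze, the age reported for "postgres" should have dropped
to near zero, while the other databases carry on aging as before.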
-- 
let name=cbbrowne and tld=gmail.com in String.concat @ [name;tld];;
http://linuxfinances.info/info/
You can measure a programmer's perspective by noting his attitude on
the continuing vitality of FORTRAN. -- Alan J. Perlis



Re: [ADMIN] GUI ERD/ERM tools?

2010-11-15 Thread Chris Browne
loupicci...@comcast.net (Lou Picciano) writes:
 This question from our users keeps coming up...

 What tools are each of you using for ERD/ERM? pgAdmin's Graphical Query Builder
 is helpful, but doesn't address the need.

 Can any of you recommend specific tools, preferences, etc?

I have tended to find that when doing design work, nothing is better for
the purposes of discussion of alternatives than a whiteboard and
markers.

Once the thinking has been done, I might use TCM
(http://wwwhome.cs.utwente.nl/~tcm/) to draft diagrams of some of the
relationships, to present them compactly.

Once the thinking is stable enough to create tables, I like SchemaSpy
for providing diagramming that people can click through to examine the
relationships between tables.  http://schemaspy.sourceforge.net/
-- 
output = (cbbrowne @ gmail.com)
Implying that you can build systems  without  rigourous  interface
specification is always a powerful selling technique to the clueless.
-- Paul Campbell, seen in comp.object.corba



Re: [ADMIN] 2 PGSQL instances in the same server

2010-11-04 Thread Chris Browne
rbarr...@gmail.com (Ramiro Barreca) writes:
 We need to have, for migration, both an instance of our current PG 8.4.4 and
 a new one of PG 9.0 for testing.
 Where can we find a paper to help us?
 Our platform is CentOS 5.4, x86 64 bits.

The complication to this is that it is quite likely not simple to do
this using pre-packaged (e.g. - RPM files) Postgres builds.

Anyone that's involved with building Postgres tends to have tooling
handy to help have multiple versions of Postgres around.

I have a script I use for this sort of thing:
https://github.com/cbbrowne/pginit

Greg Smith has something analogous, though more sophisticated:
https://github.com/gregs1104/peg

It amounts to:

  1.  Installing both versions of software;
  2.  Possibly using scripts to help manage where the results get
  installed.
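
Done by hand, it might look something like the following sketch, where
the paths and ports are purely illustrative, not anything mandated:

  /opt/pgsql-8.4/bin/initdb -D /srv/pg/data-8.4
  /opt/pgsql-9.0/bin/initdb -D /srv/pg/data-9.0
  /opt/pgsql-8.4/bin/pg_ctl -D /srv/pg/data-8.4 -o "-p 5432" -l /srv/pg/84.log start
  /opt/pgsql-9.0/bin/pg_ctl -D /srv/pg/data-9.0 -o "-p 5433" -l /srv/pg/90.log start

The essential points are merely distinct data directories and distinct
port numbers.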

If you're accustomed to entrusting installation totally to RPM-based
installations of Postgres, you're headed into new territory to have to
choose and configure where the installations are done.  But it's really
no big deal - it's not unusual for me to install several different
instances of Postgres in the course of an hour, refreshing some
instances I use to test Slony-I.
-- 
output = (cbbrowne @ linuxfinances.info)
In the  free software world, a  rising tide DOES lift  all boats, and
once the  user has tasted  Unix it's easy  for them to  switch between
Unices. -- david parsons



Re: [ADMIN] Database level encryption

2010-04-07 Thread Chris Browne
terminato...@gmail.com (Timothy Madden) writes:
 Andreas 'ads' Scherbaum adsm...@wars-nicht.de wrote:

 If someone captures the machine the bad guy can install a network
 sniffer and steal the database passwords upon connect.

 I think protecting against a keylogger is a different issue than
 database encryption. Is this why database encryption is not needed
 for PostgreSQL, as people here say ?

No, the nuance is a bit different.

It's not that database encryption is not needed - it's rather that
database encryption doesn't usefully protect against a terribly
interesting set of attacks.

When we think through the scenarios, while encrypting the whole database
might seemingly protect against *some* attacks, that's not enough of the
story:

 - There are various classes of attacks that it doesn't help one bit
   with.

 - In order to have the database accessible to the postmaster process,
   there needs to be a copy of the decryption key on that machine,
   and it is surprisingly difficult to protect that key from someone
   who has physical access to the machine.

This has the result that people are inclined to suggest that encrypting
the whole database mayn't actually be a terribly useful technique in
practice.
-- 
Know how to blow any problem up into insolubility.  Know how to use the
phrase The new ~A system to insult its argument, e.g., I guess this
destructuring LET thing is fixed in the new Lisp system, or better yet,
PROLOG.  -- from the Symbolics Guidelines for Sending Mail



[ADMIN] pg_dump dumping out some irrelevant grants

2009-07-15 Thread Chris Browne
I'm finding that pg_dumps are dumping out, right near the end, the
following sequence of grants that are causing our QA folk a little bit
of concern:

REVOKE ALL ON SCHEMA public FROM PUBLIC;
REVOKE ALL ON SCHEMA public FROM chris;
GRANT ALL ON SCHEMA public TO chris;
GRANT ALL ON SCHEMA public TO PUBLIC;

The problem isn't anything terribly deep - it's just that there is no
user "chris" in their environment ("chris" happens to be one of the
superuser accounts on my workstation ;-)), so that the REVOKE/GRANT
combination raises errors.

Note that:
 - I used the postgres superuser for anything needing superuserness
 - I decline to use sed to filter this out; see the .sig ;-)

Is this an artifact of the fact that "chris" is the 'base superuser'?
-- 
output = (cbbrowne @ linuxfinances.info)
http://cbbrowne.com/info/postgresql.html
Some people, when confronted with a Unix problem, think ‘I know, I’ll
use sed.’ Now they have two problems.  - j...@lucid.com



Re: [ADMIN] Catching up Production from Warm Standby after maintenance - Please help

2009-07-08 Thread Chris Browne
jenniferm...@hotmail.com (Jennifer Spencer) writes:

 Am I wrong?

 Probably. My first concern is to make sure you aren't doing VACUUM
 FULL as part of your maintenance cycle. That option is meant for
 recovery from extreme bloat, and is sort of a last resort.

 Good - glad to be wrong about that!  We do mostly inserts, no
 updates and very few deletes.  We drop entire tables but don't
 delete often.  We have very long rows, though.  Do you think the
 above is a situation likely to create extreme bloat? 

That kind of situation is one I'd not expect to lead to much, if any
bloat.

The usual cases that lead to bloat are where there are a lot of
updates/deletes.  That is when you can expect to have pages in use
that are nearly empty.

If it's nearly insert-only, then pages would only be nearly empty if
they are next to pages that are completely full (and hence needed to
be split).  That doesn't seem like a major problem ;-).

 My Sybase experience with extreme bloat was that it was caused by a
 three-field clustered index in a very long short-row table over time
 (~a year).  This job doesn't use clustered indexes.

PostgreSQL doesn't have clustered indexes of that sort, so that
problem won't emerge ;-).

 I thought we had to do vacuum full to avoid transaction ID
 wraparound/reset issues?  We do have a lot of transactions, a whole
 lot.  Are you saying that most admins avoid VACUUM FULL as much as
 possible?  What about XID?

No, you don't need to use VACUUM FULL to avoid ID wraparound.  Plain,
NOT-FULL vacuum, which does not block things, handles that perfectly
well.  No need to relate FULL vacuum with that.
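
A plain vacuum can be as humble as a nightly cron entry along these
lines (a sketch, assuming vacuumdb is on cron's PATH):

  30 2 * * *  vacuumdb --all --analyze --quiet

and that takes care of wraparound maintenance as a side effect,
without blocking concurrent activity.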
-- 
let name=cbbrowne and tld=cbbrowne.com in String.concat @ [name;tld];;
http://linuxdatabases.info/info/advocacy.html
There's a  new language called C+++.   The only problem  is every time
you try to compile your modem disconnects.



Re: [ADMIN] Security question UNIX Sockets vs CIDR style access

2009-06-01 Thread Chris Browne
kev...@consistentstate.com (Kevin Kempter) writes:
 I'm looking for thoughts/feedback on the use of UNIX sockets vs standard CIDR
 style access (i.e., ident, MD5, etc.) to a Postgres cluster. What are the
 pros/cons, which is more secure and why, etc...

There is no single answer, which is essentially why there is the whole
array of access methods.

Each has reasons to be preferable under particular circumstances, and
there is a fair bit of documentation on this in the standard
documentation.  

Please see:
   http://www.postgresql.org/docs/8.3/static/auth-methods.html
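
Purely for illustration, a pg_hba.conf mixing the two styles might
look like this (the address range is hypothetical):

  # TYPE  DATABASE  USER  CIDR-ADDRESS     METHOD
  local   all       all                    ident sameuser
  host    all       all   192.168.1.0/24   md5

The first line governs Unix socket connections; the second, TCP/IP
connections from the local network.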
-- 
let name=cbbrowne and tld=linuxdatabases.info in String.concat @ 
[name;tld];;
http://cbbrowne.com/info/sgml.html
Over a hundred years ago, the German poet Heine
 warned the French not to underestimate the power of ideas:
 philosophical concepts nurtured in the stillness of a
 professor's study could destroy a civilization.
--Isaiah Berlin in /The Power of Ideas/



Re: [ADMIN] How to run PostgreSQL?

2009-05-08 Thread Chris Browne
j...@commandprompt.com (Joshua D. Drake) writes:
   * Why should I have to configure a custom init.d script so my
  PostgreSQL will start?
 
 to gain control over the system issue.  From my perspective pg has always
 been at the enterprise layer,

 This surprises me a bit. In my experience Enterprises don't compile from
 source and those that do, push it into a package so they can manage it.

 Wait... there is one place in the enterprise where I could see this
 being the case. Places that are running legacy systems like AIX or HPUX
 that don't necessarily even have a proper package management system.

There are two other scenarios that I have seen that have led us to
generally compile from sources, both with common features:

  a) Slony-I expects to be built alongside the database.

 In principle, the relevant compiled components *could* be added
 in via PGXS, but nobody has had the time to set that up; it
 hasn't seemed that valuable to do.

  b) We've got some internal data types that expect to be built in
 contrib/

 Again, we could presumably use PGXS to evade this...  Mumble...

It would be *conceivable* to build RPMs or something, but that seems
more trouble than it's worth for, erm, AIX :-).
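
(For reference, the PGXS route for a contrib-style module generally
amounts to little more than this sketch - the module name and the path
to pg_config being hypothetical:

  cd my_module
  make USE_PGXS=1 PG_CONFIG=/usr/local/pgsql/bin/pg_config install

which is part of why it keeps looking like it *ought* to be easy to
set up.)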
-- 
select 'cbbrowne' || '@' || 'cbbrowne.com';
http://linuxdatabases.info/info/multiplexor.html
Bother, said Pooh as he struggled with sendmail.cf.
It never does quite what I want.
I wish Christopher Robin were here.. 



Re: [ADMIN] Updating a very large table

2009-04-24 Thread Chris Browne
rafael.domici...@gmail.com (Rafael Domiciano) writes:
 Hello Kevin, thanks for the response.
 Doing the alter table to add the new column was fast: ALTER TABLE table1 ADD
 COLUMN new_column date;
 The problem is that I have to do an update on this column, and the values are
 going to be a mix of 2 other columns of the table1, something
 like this:
 update table1
 set new_column = (date)
 where
   new_column is null;
 Postgres Version: 8.3.6
 OS: Fedora Core 9
 4 GB RAM

If you try to do this in one swell foop, it's going to take hours,
lock anything else that would want to access the table, and bloat the
table, all of which is exactly what you don't want...

I'd suggest doing the updates in more bite-sized pieces, a few
thousand tuples at a time.

Further, to make that efficient, I'd suggest adding an index, at least
temporarily, on some column in the table that's generally unique.  (A
date stamp that *tends* to vary would be plenty good enough; it
doesn't need to be strictly unique.  What's important is that there
shouldn't be many repeated values in the column.)

Thus, the initial set of changes would be done thus...

  alter table table1 add column new_column timestamptz;
  create index concurrently temp_newcol_idx on table1(quasi_unique_column)
    where (new_column is null);

It'll take a while for that index to be available, but it's not really
necessary to use it until you have a lot of tuples converted to have
new_column set.

Then, run a query like the following:

  update table1 set new_column = [whatever calculation]
  where new_column is null and
quasi_unique_column in
  (select quasi_unique_column from table1 where new_column is null 
limit 1000);

This should be repeated until it no longer finds any tuples to fix.
Once this is complete, the temporary index may be dropped.
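
To automate the repetition, a little driver loop suffices; a sketch,
where the date expression (old_col::date) and the names are merely
illustrative:

  while true; do
    tag=$(psql -d mydb -c "update table1 set new_column = old_col::date
            where new_column is null and quasi_unique_column in
              (select quasi_unique_column from table1
                where new_column is null limit 1000);")
    echo "$tag"                       # psql reports the command tag, e.g. UPDATE 1000
    [ "$tag" = "UPDATE 0" ] && break  # nothing left to fix
    psql -d mydb -c "vacuum table1;"  # keep bloat in check between batches
  done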

The number 1000 is somewhat arbitrary:

  - 1 would be bad, as that means you need to do 8 million queries to
    process an 8M tuple table

  - 8000000 would be bad, as that would try to do the whole thing in
    one big batch, taking a long time, locking things, bloating
    things, and consuming a lot of memory

1000 is materially larger than 1, but also materially smaller than
8000000.

Using 10000, instead, would mean more work is done in each
transaction; you might want to try varying counts, and stop increasing
the count when you cease to see improvements due to doing more work in
bulk.  I doubt that there's a material difference between 1000 and
10000.

Make sure that the table is being vacuumed once in a while; that
doesn't need to be continuous, but if you want the table to only bloat
by ~10%, then you should vacuum after each batch of updates amounting
to ~10% of the table.
-- 
(reverse (concatenate 'string gro.mca @ enworbbc))
http://www3.sympatico.ca/cbbrowne/advocacy.html
Mary had a little lambda
A sheep she couldn't clone
And every where that lambda went
Her calculus got blown



Re: [ADMIN] password strength verification

2008-12-18 Thread Chris Browne
rexma...@yahoo.com (Rex Mabry) writes:
  If a company requires a password to be a combination of letters,
  numbers and special characters, Oracle has a profile setting with a
  password verify function that can be used to specify a function
  that can enforce this.  Does postgres have a setting or function to
  verify and enforce a password policy?  I am very familiar with
  pg_hba.conf, but I am looking specifically at passwords.

If I were wanting to enforce this, I think I'd do it via PAM.

That is, I would configure PostgreSQL to use the PAM service (METHOD =
"pam", OPTION = name of the PAM service), and configure these requirements
into the PAM service.
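
The pg_hba.conf entry would then look something like this sketch,
where the PAM service name and address range are hypothetical:

  # TYPE  DATABASE  USER  CIDR-ADDRESS  METHOD  OPTION
  host    all       all   10.0.0.0/8    pam     postgresql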

There are several alternative indirections available:
 - LDAP authentication would allow you to manage password policy
   in the LDAP instance, quite independent of PostgreSQL.

 - krb5 indicates use of Kerberos, which would, again, keep passwords
   out of PostgreSQL altogether.

With all of these options being readily available for using
centralized authorization management and policy, I don't see any
particular value in duplicating low level security policy mechanisms
in PostgreSQL.
-- 
output = (cbbrowne @ cbbrowne.com)
http://linuxdatabases.info/info/postgresql.html
Well, I wish  you'd just  tell me rather   than trying to engage   my
enthusiasm, because I haven't got one. -- Marvin the Paranoid Android



Re: [ADMIN] change user passwd

2008-12-02 Thread Chris Browne
[EMAIL PROTECTED] (Isabella Ghiurea) writes:
 I'm using PG 8.3.4; what's the method for remote users to change
 their PG passwd?

The method is to use the ALTER USER command.

   http://www.postgresql.org/docs/8.3/static/sql-alteruser.html
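
For instance (role name and password being hypothetical):

  psql -d mydb -c "ALTER USER webuser WITH PASSWORD 'new-secret';"

Note that, issued that way, the new password crosses the wire in the
clear inside the statement, so it's worth doing this over an
SSL-protected connection.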
-- 
(format nil [EMAIL PROTECTED] cbbrowne acm.org)
http://cbbrowne.com/info/lisp.html
The classic  Common Lisp defmacro is  like a cook's knife; an elegant
idea which seems dangerous, but which experts use with confidence. 
-- Paul Graham, _On Lisp_



Re: [ADMIN] replication with table add/removes..

2008-10-08 Thread Chris Browne
[EMAIL PROTECTED] (Martin Badie) writes:
 Hi, I have a system where constant table adds/removes are
 present. I want to set up replication between master and slaves but
 am not sure which one is the best solution for that kind of
 situation. Since I am new to replication stuff on postgresql I am
 truly lost, but I know that Slony is not an answer for replication
 where table adds/removes are present.  Best Regards.

The only way that is at all supportable right now in that regard is to use PITR:
   http://www.postgresql.org/docs/8.3/interactive/continuous-archiving.html

That does NOT allow you to query the backup nodes; in order to do so,
you must activate them, which amounts to failing over to the backup
node.

If your system doesn't have a stable schema, that seems like a
problem...
-- 
select 'cbbrowne' || '@' || 'linuxfinances.info';
http://linuxdatabases.info/info/wp.html
Objects  Markets
Object-oriented programming is about the modular separation of what
from how. Market-oriented, or agoric, programming additionally allows
the modular separation of why.
-- Mark Miller



Re: [ADMIN] replica of database

2008-09-11 Thread Chris Browne
[EMAIL PROTECTED] (Joshua D. Drake) writes:
 Aftab Alam wrote:
 Yes ,I want a replica of my db so that I can use it as failover



 Version 7.3 linux AS release 4 update 6

 Version 7.3 is end of life and I don't know of *any* of the
 replication technologies that will work with it.

Slony-I version 1.1 can still work with 7.3.  We ceased support of 7.3
when we released v1.2, but recommended, at the time, using 1.1 for
7.3-compatibility.

  http://archives.postgresql.org/pgsql-announce/2006-10/msg00012.php
-- 
output = (cbbrowne @ linuxfinances.info)
http://linuxfinances.info/info/emacs.html
A LISP programmer knows the value of everything, but the cost of
nothing. -- Alan J. Perlis



Re: [ADMIN] replica of database

2008-09-11 Thread Chris Browne
[EMAIL PROTECTED] (Brad Nicholson) writes:
 On Thu, 2008-09-11 at 12:19 -0600, Scott Marlowe wrote:
 On Thu, Sep 11, 2008 at 8:30 AM, Chris Browne [EMAIL PROTECTED] wrote:
  [EMAIL PROTECTED] (Joshua D. Drake) writes:
  Aftab Alam wrote:
  Yes ,I want a replica of my db so that I can use it as failover
 
 
 
  Version 7.3 linux AS release 4 update 6
 
  Version 7.3 is end of life and I don't know of *any* of the
  replication technologies that will work with it.
 
  Slony-I version 1.1 can still work with 7.3.  We ceased support of 7.3
  when we released v1.2, but recommended, at the time, using 1.1 for
  7.3-compatibility.
 
 And as  buggy as slony 1.0 might have been, I ran it for about 2 years
 in production replicating a HUGE amount of data daily with zero
 failures. Of course, the servers were fast and reliable, so that might
 have helped cover a lot of issues other people had for us.

 And when we ran it we had replicas getting corrupted due to bugs almost
 weekly (based on a particular pattern of activity).  The edges are
 there.  It never did lose data on us.

Mind you, that was on 7.4, not 7.3.

It's possible that:
 a) 7.4 did new stuff, so that those index corruptions would not have been
present in 7.3, but also that
 b) 7.3 might have data-eating problems not present in 7.4.

I *would* suggest using 1.1.[latest] of Slony-I, as that should have
the fewest issues, on the Slony-I side, of any version available to
run against PG 7.3, and should work the most cleanly.

But on the other hand, I'd *also* strongly urge using this to get off
of v7.3 and onto something a LOT newer, ASAP.  I think you can get to
PostgreSQL 8.1 using the 1.1 branch, which is usefully newer :-).
-- 
let name=cbbrowne and tld=acm.org in String.concat @ [name;tld];;
http://www3.sympatico.ca/cbbrowne/unix.html
Economists are still trying to figure out why the girls with the least
principle draw the most interest.



Re: [ADMIN] Clustering 2 EDBs On windows

2008-05-09 Thread Chris Browne
[EMAIL PROTECTED] (Sunitha S) writes:
 Hi All,
 We have set up EDB-EDB replication by configuring the master on one
 machine and the slave on another machine (both running on Windows OS and
 Postgres Plus Advanced Server 8.3) to achieve the remote master/slave setup
 as per the following document:
 http://www.enterprisedb.com/documentation/edb-to-edb-replication.html
 We were able to do the following:
 1. Successfully register the EDB-Replication as a Windows service on both
 master and slave machines
 2. Add the master and slave engines to EDB-Replication (through customized
 master.conf and slave.conf files)
 But the document talks about some slonik scripts for registering the
 nodes, replication sets and subscriptions.
 These scripts are shell scripts, so how do we run these scripts on Windows,
 and are the steps we followed proper for achieving remote master/slave
 replication?

I believe that EDB-Replication is some form of customization of the
Slony-I replication system, as documented at http://slony.info/.

There may be relevant documentation at http://slony.info/; if that
is not of assistance, you will likely need to contact EnterpriseDB for
support, as they are the only ones particularly aware of and familiar
with any changes or customizations they may have done.
-- 
output = (cbbrowne @ linuxfinances.info)
http://linuxdatabases.info/info/linuxdistributions.html
Those who do not learn from history, loop.



Re: [ADMIN] Tuning

2008-04-07 Thread Chris Browne
[EMAIL PROTECTED] (Carol Walter) writes:
 I have some questions about tuning.  The PostgreSQL documentation
 says that you don't need to worry about index maintenance and tuning
 with PostgreSQL.  I'm used to systems that work a lot better if they
 are periodically re-orged or re-indexed.  Is it true that one need
 not be concerned with this?  I'm certain that the databases must
 require some human intervention.  What kind of tuning or other
 intervention are you doing?  What kind of tools are available and are
 being used?

Well, there is one side to things where yes, indeed, maintenance is
quite necessary, and that is in the area of vacuuming.

http://www.postgresql.org/docs/8.3/static/routine-vacuuming.html

The need to 'reindex' or 'reorg' tables is not non-existent, however
it is needed *way* less frequently than was the case in much older
versions of PostgreSQL.  e.g. - with v7.2, there were patterns of
updates that would leave portions of indexes not usable, but the issue
was rectified in ~7.4, and people have not been observing problems
relating to this former scenario.

Back when we had systems on v7.2, we had to shut down every few months
and reindex some tables in order to keep performance OK.  That's no
longer the case with systems running on v8.1, and as we bring 8.3 into
production, I expect even less need for manual interventions.

If you are running VACUUM and ANALYZE often enough, and autovacuum
pretty much does so, now, then there shouldn't be much need to do
re-orging of the system.
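
On the rare occasion that an index really has gotten into a bad state,
the remedy is simple enough (names hypothetical):

  psql -d mydb -c "REINDEX TABLE accounts;"

but that should nowadays be an exceptional event rather than scheduled
maintenance.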
-- 
let name=cbbrowne and tld=acm.org in String.concat @ [name;tld];;
http://linuxfinances.info/info/nonrdbms.html
In the name of the Lord-High mutant, we sacrifice this suburban girl
-- `Future Schlock'



Re: [ADMIN] best practices for separating data and logs

2008-01-02 Thread Chris Browne
[EMAIL PROTECTED] (Tom Lane) writes:
 Peter Koczan [EMAIL PROTECTED] writes:
 I'm planning a lot of changes for migrating to PostgreSQL 8.3, among
 them being a better way of separating data and logs (transaction logs,
 that is).

 Currently, the OS and log data are on one disk system, and the data
 (including configs) are on the other disk system. After creating the
 database cluster, I copy the pg_xlog directory to the OS system and
 symlink it from the database.

 So, I'm wondering...

 - Are there any best practices, or better practices, than symlinking?

 I believe 8.3's initdb has an explicit option for making pg_xlog be a
 symlink to someplace.  The results aren't different from doing it
 manually, but it saves a step (and a chance for mistake).

Yes, indeed.

  -X, --xlogdir=XLOGDIR location for the transaction log directory

I had not been aware of this one; this seems like a nice feature to
add support for in an init script...
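
For instance, a sketch with hypothetical paths:

  initdb -D /srv/pg/cluster1 -X /wal/cluster1/pg_xlog

gets you the symlinked pg_xlog in a single step.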

We've been using an init script that offers a whole bunch of options,
originally due to Drew Hammond.

These days, the actions offered are thus:
  [start|reload|stop|env|mkdir|initdb|logtail]

start/stop/reload are pretty traditional.  The other options are
pretty interesting, particularly for cases where you might want to:

 - Frequently create databases from scratch, as when testing
   CVS HEAD
 - Alternatively, to help establish common policies, for the less
   frequent cases.

  env:   Sets up PATH, MAN_PATH, PGPORT with the values used by
 the backend in this init file

  mkdir: Sets up all the directories required both for DB backend and
 for logging

  logtail:  runs tail -f on the last log file for the cluster

  initdb:  Runs initdb, pointing at particular directories, and with
 particular configuration policy.

 I have recently augmented this by making it smart enough to
 rewrite the postgresql.conf file (using sed) to establish
 default values for a dozen or so options that tend to get
 customized with fairly common values.

 In effect, the entire cluster configuration gets set up in
 about a 10-line section near the top of the script.

 Adding in an option to redirect pg_xlog seems like a mighty
 fine idea; I know that on various occasions, I have had the
 irritation of building clusters and having to go to some
 fragile manual effort to shift pg_xlog somewhere else.

 Automating THAT seems like it's a no-brainer as far as
 being an excellent idea...

I probably ought to ask around for permission to release this; it
seems like it's probably useful enough (I have been using it a lot)
that it ought to be in a CVS repository somewhere, rather than
languishing on my desktop.
-- 
(format nil [EMAIL PROTECTED] cbbrowne linuxfinances.info)
http://linuxdatabases.info/info/linuxxian.html
I am not a Church numeral!
I am a free variable!



Re: [ADMIN] Dump database more than 1 flat file

2007-12-23 Thread Chris Browne
[EMAIL PROTECTED] (Phillip Smith) writes:
 If there isn't another way, how can I put the
 header at the beginning of the file without opening it?
 With the cat command I can only append at the end.

 After the split files are loaded on the new server file system:
   cat splitfile*.txt > bigfile.txt
   psql < bigfile.txt


 THINK BEFORE YOU PRINT - Save paper if you don't really need to print this

 ***Confidentiality and Privilege Notice***

 The material contained in this message is privileged and confidential to
 the addressee.  If you are not the addressee indicated in this message or
 responsible for delivery of the message to such person, you may not copy
 or deliver this message to anyone, and you should destroy it and kindly
 notify the sender by reply email.

 Information in this message that does not relate to the official business
 of Weatherbeeta must be treated as neither given nor endorsed by Weatherbeeta.
 Weatherbeeta, its employees, contractors or associates shall not be liable
 for direct, indirect or consequential loss arising from transmission of this
 message or any attachments.

You should think before you attach these sorts of confidentiality notices.

If you put them on messages like this that are *OBVIOUSLY* being sent
to publicly-available mailing lists, this may undermine future claims
of material being supposedly-confidential.

In other words, by WRONGLY attaching confidentiality notices, courts
might decide to ignore them even in cases where you imagined they
ought to be legitimate...
-- 
let name=cbbrowne and tld=linuxfinances.info in String.concat @ 
[name;tld];;
http://linuxfinances.info/info/x.html
The cigars in Los Angeles that were Duchamp-signed and then smoked.



Re: [ADMIN] Enhancement request

2007-11-30 Thread Chris Browne
[EMAIL PROTECTED] (Jonah H. Harris) writes:
 On Nov 30, 2007 4:30 PM, Tom Lane [EMAIL PROTECTED] wrote:
 For what?

 Convenience.

 AFAICS we are moving as fast as we can in the direction of auto vacuum
 and analyze.  Adding more frammishes to the manual commands seems like
 gilding the buggy whip.

 Autovacuum will never be the be all end all.

And why does it not suffice to do the following?

SCHEMA=billing
for table in `psql -qt -d my_database -c "select table_name from
information_schema.tables where table_schema = '${SCHEMA}' and table_type =
'BASE TABLE';"`; do
  psql -d my_database -c "vacuum analyze ${SCHEMA}.${table};"
done

I don't see a need to add more to the SQL grammar when the above can
be done in 4 lines of shell script.

It seems to me that if you actually *NEED* to do 'sophisticated
logic-driven' VACUUMing, then you are already headed down a road where
you will need to have:

a) A script

b) Some query criteria, whether in the DBMS, or purely within the
shell, to handle the logic bit.

Once you're there, you have *AT LEAST* the 4 lines of script that I
suggested, if not considerably more.

Interestingly, the .sig chosen below actually seems somewhat germane
to this...

What you're asking for, whether it's "gilding the buggy whip" or
"adding frammishes to manual commands" [1], is, in fact, MORE than
you're suggesting it is.  You're not merely looking for a frammish;
you're proposing that it is meaningful for us to encourage a policy of
vacuuming on a per-schema basis.  That's not merely a mechanism to let
the user do what they want - that's policy.

In contrast, while what is in my little script represents policy, as a
whole, none of the components represent policies in and of themselves.

Footnotes: 
[1]  I *love* the way Tom phrased that; that sentence is going into my
personal fortunes file...
-- 
output = reverse(ofni.sesabatadxunil @ enworbbc)
http://www3.sympatico.ca/cbbrowne/languages.html
People consistently decry  X for doing precisely what  it was designed
to do: provide a mechanism to allow *OTHERS* to build GUI systems.
-- John Stevens [EMAIL PROTECTED] 



Re: [ADMIN] Migrating from 32 to 64 bit

2007-11-26 Thread Chris Browne
[EMAIL PROTECTED] (Medi Montaseri) writes:
 But theoretically speaking, the 32- or 64-bit-ness of the application (i.e. the
 postmaster server) should not influence the data types offered by a particular
 DB version. That is, the semantics of data types and the cpu-arch (register
 width, big endian, little endian, sparc, mips, x86, etc.) offered by a
 particular DB version should be orthogonal.
 A practical example is when I first began my business on a Mac, then I moved
 the database to a Sun and then on to a mainframe.

That's well and fine, but the point is that when those (reasonably
generic!) data types get compiled into code for a particular platform,
with particular endianness and word size, how it is optimal to
represent them will vary based on the characteristics of the platform.

As a result, not only do you need different executable binaries each
platform, but you also need different binary database structures on
each platform.

A conceptually mitigating factor should be that by the time you get
around to changing platforms, there will likely be a Newer, Better,
Major PostgreSQL version available.  So you should be considering
doing a migration from old, slower, inferior version on the inferior
platform to the better, stronger, faster version on the superior
platform.  With THAT context, the need to run initdb is all the more
a given.
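
The move itself is the usual dump and reload; a sketch, with
hypothetical names, running the tools from the *newer* installation:

  pg_dump -Fc -f mydb.dump mydb   # on the old 32-bit host
  createdb mydb                   # on the new 64-bit host, after initdb
  pg_restore -d mydb mydb.dump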
-- 
output = (cbbrowne @ linuxdatabases.info)
http://www3.sympatico.ca/cbbrowne/x.html
Have  you noticed   that,  when we were   young,  we were   told that
`everybody else   is doing  it' was   a  really  stupid reason  to  do
something,  but now it's the standard  reason for picking a particular
software package? -- Barry Gehm 



Re: [ADMIN] Installing PostgreSQL as Admin

2007-10-30 Thread Chris Browne
[EMAIL PROTECTED] (Nandakumar Tantry) writes:
     Does anyone know how to install PostgreSQL with an
 Administrator account? If not, how will I do it? Any documentation
 available?

I believe there is documentation on this matter here:
   http://pginstaller.projects.postgresql.org/faq/FAQ_windows.html
-- 
(format nil [EMAIL PROTECTED] cbbrowne cbbrowne.com)
http://linuxdatabases.info/info/emacs.html
If you can't see the bright side of things, polish the dark side...



Re: [ADMIN] ROI on 7.2 to 8.2

2007-08-21 Thread Chris Browne
[EMAIL PROTECTED] (Medi Montaseri) writes:
 I have finally convinced my manager to consider the idea of
 migrating from 7.2 to 8.2. Obviously he needs an ROI (Return On
 Investment) analysis (in a technical sense).  So, I am wondering if
 anyone has worked on a similar project where you had to go thru the
 release notes and bug fixes to compile such a motivational speech
 and willing to share.  Otherwise, as many links you can provide
 would be nice.

You can simply look at the release notes for 7.3, 7.4, 8.0, 8.1, and
8.2.

You can find them here:
  http://www.postgresql.org/docs/8.2/static/release.html

More specifically, you can find them at these URLs.

http://www.postgresql.org/docs/8.2/static/release-7-3.html
http://www.postgresql.org/docs/8.2/static/release-7-4.html
http://www.postgresql.org/docs/8.2/static/release-8-0.html
http://www.postgresql.org/docs/8.2/static/release-8-1.html
http://www.postgresql.org/docs/8.2/static/release-8-2.html

Print off the contents of those five URLs, and you'll find somewhere
between 40 and 50 pages worth of _brief listings_ of bug fixes and
enhancements.  Drop that pile of paper on his foot and ask if he
thinks it seems heavy enough to seem significant.

Add to that the other folks' arguments:

 - There be database eating bugs there that be fixed in later
   releases.
 - Nobody is interested in supporting problems you have with such an
   ancient version.
-- 
select 'cbbrowne' || '@' || 'cbbrowne.com';
http://www3.sympatico.ca/cbbrowne/internet.html
First Fact About Networks
Pipes can be made wider, but not shorter
-- Mark Miller



Re: [ADMIN] use replication on a database only on selected schemes?

2007-08-08 Thread Chris Browne
[EMAIL PROTECTED] (Stefan Zweig) writes:
 does anybody know whether it is possible to replicate only selected
 schemas from a certain database using the Slony-I replication
 package, while other schemas from the same database do not get
 replicated?

In the Slony-I replication system, you choose which tables are
replicated, so if there are tables that mustn't be replicated, you
simply don't choose those ones when setting up replication.
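
In slonik terms, it is just a matter of which "set add table" requests
you issue; a sketch, with hypothetical cluster, conninfo, and table
names:

slonik <<_EOF_
cluster name = mycluster;
node 1 admin conninfo = 'dbname=app host=db1';
node 2 admin conninfo = 'dbname=app host=db2';
create set (id = 1, origin = 1, comment = 'tables we replicate');
set add table (set id = 1, origin = 1, id = 1,
               fully qualified name = 'billing.invoices');
_EOF_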
-- 
cbbrowne,@,cbbrowne.com
http://cbbrowne.com/info/lisp.html
Do you know where your towel is?



Re: [ADMIN] Raw disk space used

2007-08-01 Thread Chris Browne
[EMAIL PROTECTED] (Carol Walter) writes:
 Is there a way to tell what the raw disk space used by a single
 database is?  I know that databases are fluid, but if there is a way
 to do even a snapshot view that is a ballpark figure, I'd be happy.
 My user keeps clamoring for this figure.

Sure, you can identify the database via select oid, * from
pg_catalog.pg_database;

Then you should be able to head to $PGDATA (wherever the database
data lives), and run du, and search for the directory whose name is
the oid value for the database that you wanted to analyze.

If users are using tablespaces, then tables can live in
user-controlled places, which would make it rather more complex to do
this analysis, but if they have kept to the simpler approach of just
letting data fall where it will, this should do the trick...
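
In other words, something like this sketch (database name hypothetical,
and assuming PGDATA is set), since the per-database directories live
under $PGDATA/base:

  oid=$(psql -At -d postgres -c "select oid from pg_database where datname = 'mydb';")
  du -sh "$PGDATA/base/$oid"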
-- 
(reverse (concatenate 'string gro.mca @ enworbbc))
http://linuxfinances.info/info/linux.html
High-level languages are a pretty good indicator that all else is
seldom equal. - Tim Bradshaw, comp.lang.lisp



Re: [ADMIN] linux os

2007-07-11 Thread Chris Browne
[EMAIL PROTECTED] (Mohd Ghalib Akhtar) writes:
 how to download linux 7.3 image file(means os) 

According to kernel.org, the latest version of Linux is 2.6.22.1.
That's a long ways from some non-existent version 7.3...
-- 
let name=cbbrowne and tld=linuxfinances.info in String.concat @ 
[name;tld];;
http://linuxdatabases.info/info/oses.html
For example, if errors are  detected in  one of  the disk drives,  the
system  will allow  read-only access to   memory until the  problem is
resolved.  This,  PE claimed,  prohibits   a damaged disk   drive from
entering errors into the system.  -- Computerworld 8 Nov 82 page 4.



Re: [ADMIN] Help me find a suitable replication solution

2007-07-05 Thread Chris Browne
[EMAIL PROTECTED] (Yuan HOng) writes:
 It seems Slony-II would be quite promising, but I can't wait that
 long. What is the best choice for me now?

Worth noting: Slony-II would never have been suitable, as it would
have required that all the master servers be in the same data
centre.

I don't think you have an easy answer here...
-- 
select 'cbbrowne' || '@' || 'cbbrowne.com';
http://cbbrowne.com/info/internet.html
(THASSERT (PLANNER RG))
-- Example of HACKER statement.



Re: [ADMIN] replication between linxu postgresql and Windows postgresql

2007-06-25 Thread Chris Browne
[EMAIL PROTECTED] (Shoaib Mir) writes:
 Slony can handle this all.

No it can't.  It will not work with versions of PostgreSQL earlier
than 7.3.3, because it requires namespace functionality that did not
stabilize until that point.
-- 
(reverse (concatenate 'string moc.enworbbc @ enworbbc))
http://www3.sympatico.ca/cbbrowne/lsf.html
Rules of the  Evil Overlord #147.  I will  classify my lieutenants in
three   categories:  untrusted,   trusted,  and   completely  trusted.
Promotion to the third category can only be awarded posthumously.
http://www.eviloverlord.com/



Re: [ADMIN] database creation date

2007-06-25 Thread Chris Browne
[EMAIL PROTECTED] (Pradeep Chandru) writes:

   Hi,

   I have two questions.

   1. Is there a way to find the database creation time?

Elderly timestamps on files in the database's directory would be an
approximation to that.

   2. Is WAL implementation possible in postgres 7.1.1? If so,
   simple steps for the same? This is required to migrate from
   the older setup to the latest version in parallel without
   disturbing the current setup.

There isn't a reasonable way, as far as I know, to 'replicate' from
PostgreSQL 7.1 to anything newer.  

You would need to:

1. shut down all apps accessing the database, 

2. do a pg_dump to extract current data, 

3. load that into a newer version of PostgreSQL,

4. verify that all looks good, and

5. point the apps to the new database on the new version.

If the database is fairly large, the time between steps 1 and 5 could
be fairly substantial.
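
Steps 2 and 3 can be collapsed into a single pipe, using the *newer*
version's pg_dump to pull from the old server; a sketch, the host
names being hypothetical:

  pg_dump -h oldhost mydb | psql -h newhost mydb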

If you were on a newer version of PostgreSQL, say 7.3, you might be
able to use something like Slony-I to replicate data into a newer
version; that could make the outage time pretty short.

But PostgreSQL 7.1.1 is *very* old.
-- 
(reverse (concatenate 'string ofni.secnanifxunil @ enworbbc))
http://cbbrowne.com/info/slony.html
... the open research model is justified. There is a passage in the
Bible (John 8:32, and on a plaque in CIA HQ), And ye shall know the
truth, and the truth shall set ye free. -- Dave Dittrich 



Re: [ADMIN] [pgsql-advocacy] [PERFORM] Postgres VS Oracle

2007-06-19 Thread Chris Browne
[EMAIL PROTECTED] (Carol Walter) writes:
 I don't want to add gas to the flamewar, but I gotta ask.  What is in
 the 90 to 95% referred to in this email?

I'd say, look at the Oracle feature set for things that it has that
PostgreSQL doesn't.

Four that come to mind:

- ORAC = multimaster replication
- Integration with hardware vendors' High Availability systems
- Full fledged table partitioning
- Windowing functions (SQL:2003 stuff, used in OLAP)

These are features Truly Needed for a relatively small percentage of
systems.  They're typically NOT needed for:

 - departmental applications that operate during office hours
 - light weight web apps that aren't challenging the limits of
   the most expensive hardware
 - any application where reliability requirements do not warrant
   spending $1M to make it more reliable
 - applications that make relatively unsophisticated use of data
   (e.g. - it's not worth the analysis to figure out a partitioning
   design, and nobody's running queries so sophisticated that they
   need windowing analytics)

I expect both of those lists are incomplete, but those are big enough
lists to, I think, justify the claim, at least in loose terms.

The most important point is that third one, I think: 
  any application where reliability requirements do not warrant
  spending $1M to make it more reliable

Adopting ORAC and/or other HA technologies makes it necessary to spend
a Big Pile Of Money, on hardware and the humans to administer it.

Any system whose importance is not sufficient to warrant *actually
spending* an extra $1M on improving its reliability is *certain* NOT
to benefit from either ORAC or HA, because you can't get any relevant
benefits without spending pretty big money.  Maybe the number is lower
than $1M, but I think that's the right order of magnitude.
-- 
output = reverse(ofni.secnanifxunil @ enworbbc)
http://linuxdatabases.info/info/nonrdbms.html
One disk to rule them all,  One disk to find  them. One disk to bring
them all and in the darkness grind them. In  the Land of Redmond where
the shadows lie. -- The Silicon Valley Tarot Henrique Holschuh



Re: [ADMIN] [pgsql-advocacy] [PERFORM] Postgres VS Oracle

2007-06-19 Thread Chris Browne
[EMAIL PROTECTED] (Achilleas Mantzios) writes:
 I don't want to add gas to the flamewar, but I gotta ask.  What is in
 the 90 to 95% referred to in this email?

 short answer: all cases, possibly except when running a Bank or something 
 similar.

No, it's not to do with what enterprise you're running; the question
is what functionality is missing.

At the simplest level, I'd say that there are Oracle (+DB2) feature
sets that *are compelling*, particularly in the High Availability
area.

However, those feature sets are ones that require spending a Big Pile
Of Money (BPOM) to enable them.  

For instance, ORAC (multimaster replication) requires buying a bunch
of servers and spending a BPOM configuring and administering them.

If you haven't got the BPOM, or your application isn't so mission
critical as to justify budgeting a BPOM, then, simply put, you won't
be using ORAC functionality, and that discards one of the major
justifications for buying Oracle.

*NO* small business has that BPOM to spend on this, so *NO* database
operated by a small business can possibly justify buying Oracle
because of ORAC.

There will be a lot of departmental sorts of applications that:

- Aren't that mission critical

- Don't have data models so sophisticated as to require the features
  at the edges of the big name commercial DBMSes (e.g. - partitioning,
  OLAP/Windowing features) that PostgreSQL currently lacks
   
and those two categorizations, it seems to me, likely define a
frontier that allows a whole lot of databases to fall into the "don't
need the Expensive Guys" region.
-- 
cbbrowne,@,cbbrowne.com
http://www3.sympatico.ca/cbbrowne/oses.html
Rules of the Evil Overlord #219. I will be selective in the hiring of
assassins.   Anyone who  attempts to  strike down  the hero  the first
instant his back is turned will not even be considered for the job.
http://www.eviloverlord.com/



Re: [ADMIN] Size of logfiles

2007-05-21 Thread Chris Browne
[EMAIL PROTECTED] (Peter Elmers) writes:
 I have searched the web and am sorry to say that I found nothing
 about the default behavior of the log files of postgres.

 Can anyone tell me when postgres will delete the logfiles or what
 postgres automatically do with them?

Can you be a bit more precise about that?

There are the WAL log files; I expect you're not talking about that.

I expect you're talking about the log files containing error messages,
warnings, and such.

PostgreSQL will not, itself, purge out such files; consider that you
can configure it to distribute log information via syslog, which would
push the logs into a mostly OS-managed area.  

You'll presumably want to figure out what YOU want your policy to be,
and find suitable tools (lots available) to implement that policy...
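
If you do want the OS to take things over, the postgresql.conf side of
that is just (a sketch):

  log_destination = 'syslog'
  syslog_facility = 'LOCAL0'
  syslog_ident = 'postgres'

after which rotation and purging become matters of syslog/logrotate
policy rather than anything PostgreSQL manages itself.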
-- 
(reverse (concatenate 'string ofni.secnanifxunil @ enworbbc))
http://linuxfinances.info/info/linuxxian.html
We should start referring to processes which run in the background by
their correct technical name:... paenguins.  -- Kevin M Bealer



Re: [ADMIN] Fwd: File systems linux !!!

2007-04-30 Thread Chris Browne
[EMAIL PROTECTED] (jose fuenmayor) writes:
 Hi all, i need to install a database in postgres with high reading
 rates, which is the best file system to do that , I am working on
 linux.  thanks in advance 

For read rates, it should be fairly irrelevant what filesystem you
use.  Access times will be dominated by reading the data, not in
processing filesystem metadata, if the filesystem is *at all*
competently implemented.

If you have any reason to consider one filesystem more *reliable* than
another, THAT would be the reason to prefer one over another.
-- 
output = (cbbrowne @ cbbrowne.com)
http://linuxdatabases.info/info/fs.html
[LINK FROM XGP]



Re: [ADMIN] slony with diff PostgreSQL versions

2007-03-22 Thread Chris Browne
[EMAIL PROTECTED] (Ezequiel Luis Pellettieri) writes:
 Hi guys, I know I'm out of topic, but have you ever tried using
 Slony with PostgreSQL 8.2x as master and PostgreSQL 7.4.x as slave?

I've done it with 7.4.x as master, and 8.1.x as slave, the point of
the exercise being that I wanted to upgrade the master node to 8.1.
I'd call that pretty routine...
-- 
let name=cbbrowne and tld=cbbrowne.com in name ^ @ ^ tld;;
http://cbbrowne.com/info/finances.html
Where do you  *not* want to go today?  Confutatis maledictis, flammis
acribus addictis (http://www.hex.net/~cbbrowne/msprobs.html



Re: [ADMIN] Slony sync times

2007-03-16 Thread Chris Browne
Zoomby Woof [EMAIL PROTECTED] writes:
 I'm using slony1 version 1.2.1 and I'm trying to figure out what the
 actual default of the 'sync_interval' parameter is. The manual says
 100 ms in one place, and 10 seconds in another... some other places on
 the net say 60 seconds; I have even seen 2 seconds being mentioned as
 the default value for this parameter. My gut feeling is that it is 100
 ms. I haven't set this value explicitly, and stuff seems to replicate
 pretty quickly so... I haven't made any scientific tests on this though.

 Anyone knows the truth ?

 Also, are there any other dangers in lowering this number to say, 50
 ms, or even 10 ms, except the fact that the machine will be more
 loaded? We have an application where we load-balance stuff and
 occasionally we run into sync problems (the subscriber hasn't been
 updated fast enough).

There are two sync parameters:

- sync_interval
- sync_interval_timeout

The first one represents how often the slon will consider generating a
SYNC event.  In confoptions.c/confoptions.h (it moved from .h to .c in
version 1.2), the default value is 2000ms.

The second one represents how often a SYNC will get generated anyways,
even if there are no updates taking place that would cause it to
decide to generate a SYNC.  The default is 2 minutes, which seems
pretty reasonable.

The event loop will wake up every (sync_interval) ms, and check to see
if there has been any data collected to replicate.  If so, it will
generate a SYNC, and thus allow other nodes to start pulling data.  If
not, then it'll go back to sleep.  If it reaches the
sync_interval_timeout time without having yet generated a SYNC, it
will do a SYNC even though there have been no changes, so that the
systems can be aware that they're at least *trying* to replicate, even
though your application isn't giving Slony-I any work to do.

Dropping sync_interval to 100ms will mean it generates 20x as many
SYNCs (increasing the workload, a cost...), and that it can pick up
new data for replication after 1/20th the delay (a benefit).

That may very well be a good trade-off for you.  I'd be surprised if
dropping the time to 10ms gave a material further improvement; I'd
think that would add more work than benefit.  But you can feel free to
prefer otherwise, and you won't be wrong, by your metrics.
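
For reference, a sketch of pinning these down explicitly, assuming a
1.2-era slon that reads a runtime configuration file via -f (the
cluster, database, and file names here are hypothetical):

    # slon.conf
    sync_interval = 2000            # ms between checks for replicable work
    sync_interval_timeout = 120000  # ms before an empty SYNC is forced anyway

    $ slon -f slon.conf mycluster 'dbname=mydb'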
-- 
cbbrowne,@,linuxdatabases.info
http://www3.sympatico.ca/cbbrowne/multiplexor.html
Rules of the Evil Overlord #206. When my Legions of Terror park their
vehicle  to do  reconnaissance on  foot,  they will  be instructed  to
employ The Club. http://www.eviloverlord.com/

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [ADMIN] postgres in HA constellation

2006-10-13 Thread Chris Browne
[EMAIL PROTECTED] (Brad Nicholson) writes:
 On Wed, 2006-10-11 at 16:12 -0500, Jim C. Nasby wrote:
 On Wed, Oct 11, 2006 at 10:28:44AM -0400, Andrew Sullivan wrote:
  On Thu, Oct 05, 2006 at 08:43:21PM -0500, Jim Nasby wrote:
   Isn't it entirely possible that if the master gets trashed it would  
   start sending garbage to the Slony slave as well?
  
  Well, maybe, but unlikely.  What happens in a shared-disc failover is
  that the second machine re-mounts the same partition as the old
  machine had open.  The risk is the case where your to-be-removed
  machine hasn't actually stopped writing on the partition yet, but
  your failover software thinks it's dead, and can fail over.  Two
  processes have the same Postgres data and WAL files mounted at the
  same time, and blammo.  As nearly as I can tell, it takes
  approximately zero time for this arrangement to make such a mess that
  you're not committing any transactions.  Slony will only get the data
  on COMMIT, so the risk is very small.
  
 Hrm... I guess it depends on how quickly the Slony master would stop
 processing if it was talking to a shared-disk that had become corrupt
 from another postmaster.

 That doesn't depend on Slony, it depends on Postgres.  If transactions
 are committing on the master, Slony will replicate them.  You could have
 a situation where your HA failover trashes some of your database, but the
 database still starts up.  It starts accepting and replicating
 transactions before the corruption is discovered.

There's a bit of joint responsibility there.

Let's suppose that the disk has gone bad, zeroing out some index pages
for the Slony-I table sl_log_1.  (The situation will be the same for
just about any kind of corruption of a Slony-I internal table.)

There are two possibilities:
  1.  The PostgreSQL instance may notice that those pages are bad,
  returning an error message, and halting the SYNC.

  2.  The PostgreSQL instance may NOT notice that those pages are bad,
  and, as a result, fail to apply some updates, thereby corrupting
  the subscriber.

I think there's a pretty high probability of 1) happening rather than
2), but there is a risk of corruption of subscribers roughly
proportional to the probability of 2).

My gut feel is that the probability of 2) is pretty small, but I
don't have anything to point to as a proof of that...
-- 
output = reverse(gro.mca @ enworbbc)
http://www3.sympatico.ca/cbbrowne/
One of the main causes of the fall of the Roman Empire was that,
lacking zero, they had no way to indicate successful termination of
their C programs.  -- Robert Firth

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [ADMIN] Recursive use

2006-10-06 Thread Chris Browne
[EMAIL PROTECTED] (Alexander Burbello) writes:
 Hi people,

 I need to know if Postgres does recursive search, and how I can do it!
 I will explain my problem.


 table COOPERATIVE
  code_cooperative int
  code_coo_father int

 I can have 3 levels by business rules

 1 - Father
 - 2 - Children
 - 3 - Grandchildren


 I would like to have a query asking who the father and grandfather are:
 select grandfather, father from COOPERATIVE where COD_COOPERATIVE = 3

 Can Postgres solve this problem?
 Could anybody help me?

There was a proposal to implement WITH RECURSIVE for PostgreSQL 8.2;
that fell by the wayside.

The task is on the ToDo list:

http://www.postgresql.org/docs/faqs.TODO.html
Add SQL:2003 WITH RECURSIVE (hierarchical) queries to SELECT

At present, you may simulate this by writing a pl/pgsql function that
does the recursion in procedural code.
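
A minimal sketch against the table described above (the function name
is my own invention; this uses 8.0-style dollar quoting):

    CREATE OR REPLACE FUNCTION coop_ancestors(integer)
    RETURNS SETOF integer AS $$
    DECLARE
        parent integer;
        a integer;
    BEGIN
        -- find the immediate father of the given cooperative
        SELECT INTO parent code_coo_father
          FROM cooperative WHERE code_cooperative = $1;
        IF parent IS NOT NULL THEN
            RETURN NEXT parent;
            -- recurse to pick up the grandfather (and deeper, should
            -- the three-level business rule ever loosen)
            FOR a IN SELECT * FROM coop_ancestors(parent) LOOP
                RETURN NEXT a;
            END LOOP;
        END IF;
        RETURN;
    END;
    $$ LANGUAGE plpgsql;

    -- father and grandfather of cooperative 3:
    SELECT * FROM coop_ancestors(3);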
-- 
let name=cbbrowne and tld=linuxdatabases.info in name ^ @ ^ tld;;
http://cbbrowne.com/info/postgresql.html
Nondeterminism means never having to say you're wrong.  -- Unknown

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [ADMIN] best OS and HW for postgreSQL

2006-09-25 Thread Chris Browne
[EMAIL PROTECTED] (Joshua D. Drake) writes:

 Scott Marlowe wrote:
 On Sun, 2006-09-24 at 20:18 -0400, Jim Nasby wrote:
 On Sep 22, 2006, at 6:12 PM, Raul Retamozo wrote:
 Hi everyone on the list. I want to know what is the reccommended
 OS  to work with PostgreSQL , on specific with PostGIS:
 One more question is about what HW (server) offers the best
 performance for a Web Map Server bases on PostGIS and mapserver.
 In general, you're probably best off running whatever OS you're
 most  comfortable with.
 I'd amend that to say whatever flavor of unix you're most comfortable
 with.

 Well honestly that isn't true either. 

No, I think it's still true.

It seems to me that you're better off using an OS that you're
competent with than leaping out of the region of competence.

Is it better to get a 15% speedup, but then have the system fall over
because you don't know how to keep it up to date with patches for
recent CERT reports?  In many cases, I wouldn't think so...
-- 
output = (cbbrowne @ linuxdatabases.info)
http://cbbrowne.com/info/languages.html
str-str_pok |= SP_FBM; /* deep magic */
s = (unsigned char*)(str-str_ptr); /* deeper magic */
-- Larry Wall in util.c from the perl source code

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org


Re: [ADMIN] Security Concerns over User 'postgres'

2006-09-22 Thread Chris Browne
[EMAIL PROTECTED] (Lane Van Ingen) writes:
 Looked at /etc/shadow, and (in fact) it doesn't have a password, so I was
 wrong about that. 

 Tried to use the login command to directly log in as postgres, but for
 some reason could not do that on RHEL 4.0 either. So, like you said, I am
 not certain that I have a vulnerability here at all, other than su-ing from
 root.

I'm certain; you do NOT have a vulnerability there, if there is no
password in /etc/shadow.  (Well, barring stupidity like dramatic
misconfiguration of PAM to accept logins without passwords :-).)
-- 
(format nil [EMAIL PROTECTED] cbbrowne cbbrowne.com)
http://linuxdatabases.info/info/finances.html
Rules of the Evil Overlord #10.  I will not interrogate my enemies in
the inner sanctum  -- a small hotel well outside  my borders will work
just as well. http://www.eviloverlord.com/

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [ADMIN] Database Optimization and Peformance

2006-08-30 Thread Chris Browne
[EMAIL PROTECTED] (Joe McClintock) writes:
 My concern is this: doing a backup and restore does not seem an
 appropriate way to manage database fragmentation and performance. The
 documentation I have read indicates that vacuum, analyze and reindex are
 the tools to use to de-fragment and optimize the database. In my
 case they did not work and reindexing made query performance
 slightly worse. Am I missing something? As the database grows, will
 I need to essentially rebuild the db on a regular basis?

It oughtn't be necessary.

It seems quite plausible that there are a few tables that are not
being vacuumed nearly often enough.

If you have tables where large portions are modified
(updated/deleted), then you need to run VACUUM quite frequently,
otherwise such tables will be overrun with dead space.

We have tables that we run VACUUM on every five minutes because they
change really heavily.  (200-300 tuples, where we do an update to a
tuple every time a customer is billed.)
-- 
cbbrowne,@,ntlug.org
http://cbbrowne.com/info/finances.html
Why does sour cream have an expiration date? 

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [ADMIN] Changing a database owner on postgres 7.3

2006-07-17 Thread Chris Browne
[EMAIL PROTECTED] (Devrim GUNDUZ) writes:
 Hi,

 On Mon, 2006-07-17 at 12:33 +, Richard George wrote:
 I've run into trouble attempting to change the owner of a particular
 database. The offending line is -
  
 ALTER DATABASE test OWNER TO epg;

 There is no way to change ownership of a database with ALTER DATABASE in
 7.3.X . 

 Can someone suggest an alternative way of changing the owner of a
 database for 7.3? 

 AFAIR there is no quick-and-easy way to do that for 7.3. I'd create a
 new database with a new username, and I'd use the old database as the
 template:

 CREATE DATABASE newdb WITH OWNER newuser TEMPLATE olddb;

 could work for you.

I expect you could do the following:

   update pg_database set datdba = (select usesysid from pg_shadow
   where usename = 'epg') where datname = 'test';

Behind the scenes, that's probably what "ALTER DATABASE test OWNER TO
epg;" actually does.
-- 
output = reverse(gro.mca @ enworbbc)
http://www.ntlug.org/~cbbrowne/emacs.html
Catapultam  habeo!  Nisi  pecuniam omnem  mihi dabis,  ad  caput tuum
saxum immane mittam !! (I have a  catapult!  If you do not pay me the
money you owe me, I will hit you with a big rock !!)
-- Simon Gornall [EMAIL PROTECTED]

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [ADMIN] Where are the temporary work / sort files please

2006-07-13 Thread Chris Browne
[EMAIL PROTECTED] (adey) writes:
 ...but I can't find pgsql_tmp in my v8.1.4 installation.

 Where can I find these temp files please?

They get created on demand, and go away when that demand disappears.

Run a big REINDEX and you'll see them pop into place...
-- 
output = (cbbrowne @ ntlug.org)
http://cbbrowne.com/info/sap.html
...Roxanne falls in love with Christian, a chevalier in Cyrano's
regiment who hasn't got the brains God gave an eclair...
-- reviewer on NPR

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [ADMIN] 8.1 Unique Index Issue/Bug???

2006-07-13 Thread Chris Browne
[EMAIL PROTECTED] (Joshua D. Drake) writes:
 That is why NULL can be placed in a unique index: because it isn't
 actually there (in the theoretical sense).

In effect, NULL is not actually a value.
-- 
(reverse (concatenate 'string gro.mca @ enworbbc))
http://www.ntlug.org/~cbbrowne/sap.html
Has anyone ever thought about the fact that in general, the only web
sites that are consistently making money are the ones dealing in
pornography?  This brings new meaning to the term, obscene
profits. :)  -- Paul Robinson [EMAIL PROTECTED]

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [ADMIN] Database port changing issue

2006-06-09 Thread Chris Browne
[EMAIL PROTECTED] (Guido Barosio) writes:
 That's because your listen_addresses parameter points to localhost,
 and that means you will only accept TCP/IP connections on the
 loopback interface.

 In order to change this, you'll need to replace the string localhost
 with the proper value. (A bad habit, to replace the 'localhost' with a
 '*').

 Read the man page :)

  # - Connection Settings -
  listen_addresses = 'localhost'  # what IP address(es) to listen on;

 # Would be something like:
 listen_addresses = '*'

We found that listen_addresses='*' turned out fairly badly on AIX
5.3...

When we did so, the stats collector wouldn't start up, and there would
be complaints about trying to open port 5432 multiple times.

This appeared to relate to IPv6; evidently, PostgreSQL would try to
open up a socket on both the IPv4 and IPv6 addresses, and this somehow
conflicted.

Interestingly, 'localhost' would also fall prey to this; evidently
that name can resolve to both IPv4 and IPv6 addresses.

We had no difficulties when we set up a list of specific IPv4
addresses...
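
For reference, the working setup amounted to something like this in
postgresql.conf (the addresses here are hypothetical placeholders):

    listen_addresses = '127.0.0.1, 192.0.2.10'   # explicit IPv4 addresses only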
-- 
select 'cbbrowne' || '@' || 'cbbrowne.com';
http://cbbrowne.com/info/internet.html
When ever in doubt consult a song. --JT Fletcher 

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [ADMIN] Synchronize Backup to another remote database

2006-05-22 Thread Chris Browne
[EMAIL PROTECTED] (Jim C. Nasby) writes:
 On Tue, May 16, 2006 at 09:35:57AM +0800, [EMAIL PROTECTED] wrote:
 Currently we want to upgrade postgresql to 8.1.3 but our current postgresql
 database (v7.2.2) is in use on the production line so we can't stop the
 current database.
 So is there any way to synchronize backup the current postgresql database
 to another remote postgresql database without interrupting our current
 postgresql database?

 You could maybe, possibly, theoretically hack Slony to run against 7.2.
 I know someone did this to get slony to run on 7.3. You'd want to use
 an old version of Slony, which means you'd only be able to go from 7.2
 to 7.4. But once you were on 7.4, you could use the most recent Slony to
 go from 7.4 to 8.1.

Actually, the hack was on version 7.2.

---
http://linuxfinances.info/info/faq.html#AEN44224

 This is approximately what you need to do:

* Take the 7.3 templates and copy them to 7.2 -- or otherwise hardcode the 
version you're using to pick up the 7.3 templates
* Remove all traces of schemas from the code and sql templates. I basically 
changed the . to an _.
* Bunch of work related to the XID datatype and functions. For example, 
Slony creates CASTs for the xid to xxid and back -- but 7.2 cannot create new 
casts that way so you need to edit system tables by hand. I recall creating an 
Operator Class and editing several functions as well.
* sl_log_1 will have severe performance problems with any kind of data 
volume. This required a number of index and query changes to optimize for 7.2. 
7.3 and above are quite a bit smarter in terms of optimizations they can apply.
* Don't bother trying to make sequences work. Do them by hand after the 
upgrade using pg_dump and grep. 

Of course, once you have done all of the above, it's not
compatible with standard Slony anymore. So you either need to implement
7.2 in a less hackish way, or you can also hack up slony to work
without schemas on newer versions of PostgreSQL so they can talk to
each other.

Almost immediately after getting the DB upgraded from 7.2 to 7.4, we
deinstalled the hacked up Slony (by hand for the most part), and
started a migration from 7.4 to 7.4 on a different machine using the
regular Slony. This was primarily to ensure we didn't keep our system
catalogues which had been manually fiddled with.

All that said, we upgraded a few hundred GB from 7.2 to 7.4 with about
30 minutes actual downtime (versus 48 hours for a dump / restore
cycle) and no data loss.
---

I'm not sure what to suggest with regards to Slony-I versioning; newer
versions have cleaned out some bugs, so I'd tend to think that version
1.1.5 might be the best starting point.

But it's possible that there have been changes between the 1.0 and 1.1
series that would make life more difficult for users of PostgreSQL
7.2...

After fighting through this, it would be worthwhile keeping the
modified code and SQL scripts; that could make life easier for others
that want to do this.  We'd not be keen on letting this code into the
main line, but if it could help others running elderly versions,
having some sort of release for 7.2 would be nice...

 Another possibility is using a command-based replicator, like
 pgmirror, or what Continuent offers. But you need to understand the
 drawbacks of such a method.

Indeed...
-- 
(reverse (concatenate 'string moc.enworbbc @ enworbbc))
http://cbbrowne.com/info/languages.html
Much of this software was user-friendly, meaning that it was intended
for users who did not know anything about computers, and furthermore
had absolutely no intention whatsoever of learning.
-- A. S. Tanenbaum, Modern Operating Systems, ch 1.2.4

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [ADMIN] autovacuum for large periodic deletes

2006-05-17 Thread Chris Browne
[EMAIL PROTECTED] (Sriram Dandapani) writes:
 Every night, a delete of about 50 million rows occurs on a table. Do
 we need to explicitly vacuum the table, or will autovacuum handle the
 large deletes without affecting performance? I am trying to
 determine whether or not I should completely rely on autovacuum or
 also tinker with manual vacuums.

I'd be inclined to submit a vacuum on the table as part of the
process.

Moreover, I'd be inclined to consider this situation to perhaps be
something of a bug.  Deleting enormous amounts of data from a table,
on a regular basis, is not something I would expect would work out
well.
-- 
(reverse (concatenate 'string moc.enworbbc @ enworbbc))
http://cbbrowne.com/info/postgresql.html
We  are MICROS~1.   You will  be assimilated.   Resistance  is futile.
(Attributed to B.G., Gill Bates)

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [ADMIN] autovacuum for large periodic deletes

2006-05-17 Thread Chris Browne
[EMAIL PROTECTED] (Sriram Dandapani) writes:
 About 5-10 million rows stay after deletion. There are a few other
 tables where the daily deletion totals about 3-6 million. 

It would appear there is something fairly not-sane about the process,
then.  You delete about 90% of the day's data from the table each day?
That's a *lot*, and you are quite likely to have trouble with this
table blowing out the Free Space Map as a result.

 Would a vacuum full/cluster affect other operations? These tables
 have a 24x7 high data insertion rate.

Yes, VACUUM FULL and CLUSTER would block other operations while they
run.

The real Right Answer probably involves having data flow into some
sort of queue table, created fresh each day, for that day's
activities, where, at the end of the day, all of the data either gets
purged or moved to the final destination table, so that a new table
can be created, the next day.
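
A minimal sketch of that rotation, with hypothetical table names (note
that in this era LIKE does not copy indexes, so they must be recreated,
and the renames briefly lock out the concurrent inserters):

    BEGIN;
    CREATE TABLE activity_new (LIKE activity INCLUDING DEFAULTS);  -- 8.0+
    ALTER TABLE activity RENAME TO activity_20060517;
    ALTER TABLE activity_new RENAME TO activity;
    COMMIT;
    -- yesterday's rows can now be archived or discarded in one cheap step
    DROP TABLE activity_20060517;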
-- 
cbbrowne,@,cbbrowne.com
http://cbbrowne.com/info/unix.html
CBS News report on Fort Worth tornado damage:
Eight major downtown buildings were severely damaged and 1,000 homes
were damaged, with 95 uninhabitable.  Gov. George W. Bush declared
Tarrant County a disaster area.  Federal Emergency Management Agency
workers are expected to arrive sometime next week after required
paperwork is completed.

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [ADMIN] Logging long queries: not all get logged

2006-05-10 Thread Chris Browne
[EMAIL PROTECTED] writes:
 Hi,

 I'm running PG 8.0.3.  I'm trying to catch slow queries, so I have this in 
 postgresql.conf:

   # log queries that take more than 500 ms
   log_min_duration_statement = 500   # in ms

 This does log _some_ queries that take > 500 ms to run.
 However, it looks like not all queries get logged!

 And I think I spotted a pattern:

 Queries that DO get logged are:
   - queries run using psql
   - queries run by the java app that uses JDBC to talk to my PG database

 Queries that do NOT get logged are:
   - queries run by the java app that uses Hibernate to talk to my PG database

We occasionally have run into seemingly the same issue.

Question:

Are you certain that the queries are taking longer than 500ms for
PostgreSQL to process them?

Or are you merely certain that Hibernate is reporting that it took
longer than that for *it* to process them?

We have had Java applications which would report spurious slow
queries any time the garbage collector had to do any significant
amount of work.

It could be that the garbage collector is causing Hibernate to stall
while processing its logging, thereby incorrectly reporting that
database queries are running slow...
-- 
(reverse (concatenate 'string gro.gultn @ enworbbc))
http://cbbrowne.com/info/finances.html
The only problem
with Haiku is that you just
get started and then 

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [ADMIN] Best filesystem

2006-02-20 Thread Chris Browne
[EMAIL PROTECTED] (Arnau Rebassa Villalonga) writes:
Which is the best filesystem in linux for postgresql? nowadays I'm
 using ext3, I don't know if other filesystems like XFS,
 reiser... would be better from the performance point of view.

As far as I can tell, ext3 is the best *supported* filesystem for
Linux, and support trumps performance 99 days out of 100.

The trouble with all the other filesystems is that vendors are likely
to throw up their hands and ignore you if you report problems with
them, and not getting your database eaten by a filesystem error is
*WAY* more important than eking out some fraction of a percent
improvement in write performance.
-- 
output = (cbbrowne @ cbbrowne.com)
http://cbbrowne.com/info/linuxxian.html
Heavy music didn't start in  Seattle.  It started in Katy, Texas with
King's X -- Jeff Ament/Pearl Jam

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [ADMIN] How to VACUUM this table? 998994633 estimated total rows

2006-02-13 Thread Chris Browne
[EMAIL PROTECTED] (Guido Barosio) writes:
 quote:  If you are quite sure it has few if any dead tuples, it might be
 something to try to avoid VACUUMing except as needed to evade the 2^31
 transaction limit...
 You may use the pg_stattuple software, included in /contrib. This will 
 show you the current scenery, and whether or not you need to clean up dead tuples.

The trouble with pg_stattuple() is that running it is virtually as
expensive as running the vacuum.  For a big table, you pay all the I/O
cost, and any costs of the super-long-running-transaction, and don't
even get any cleanup for that cost.
-- 
let name=cbbrowne and tld=cbbrowne.com in name ^ @ ^ tld;;
http://cbbrowne.com/info/lsf.html
Philosophy is a game with objectives and no rules.
Mathematics is a game with rules and no objectives. 

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org


Re: [ADMIN] How to VACUUM this table? 998994633 estimated total rows

2006-02-11 Thread Chris Browne
[EMAIL PROTECTED] (Aldor) writes:
 I'm just curious about a VACUUM plan of this table:

 1st) When to do VACUUM?
 2nd) How often to do VACUUM?
 3rd) With which postgresql.conf parameters to set up vacuum?
 4th) When to do a VACUUM FULL?
 5th) Can autovacuum of 8.1 be used in this case?

 I'm a little bit afraid about the size of the table, but I think
 somebody should have a solution...

 Here is the complete ANALYZE output of the table:

 INFO:  tbl1: scanned 300 of 27744713 pages, containing 10802 live rows
 and 0 dead rows; 300 rows in sample, 998994633 estimated total rows

 The size of the data is 340 GB, this are 40% of the disk-array.

Vacuuming this table is likely to take a rather long time.  Hours and
hours; possibly multiple days.

I don't think you'll *ever* want to VACUUM FULL this table; I'm not
sure you ever want autovacuum to process it either.

I instead think you want to choose a time which seems best to start a
Very Long Transaction to issue a VACUUM ANALYZE on it.

If you are quite sure it has few if any dead tuples, it might be
something to try to avoid VACUUMing except as needed to evade the 2^31
transaction limit...
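
To see how much headroom remains before that limit, a query like this
against the system catalogs will do:

    -- transactions elapsed since each database's last database-wide
    -- vacuum; trouble looms as this climbs toward 2^31
    SELECT datname, age(datfrozenxid) FROM pg_database ORDER BY 2 DESC;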

I am not sure that's the only opinion you ought to consider on it...
-- 
(reverse (concatenate 'string moc.enworbbc @ enworbbc))
http://cbbrowne.com/info/sap.html
Why are there interstate highways in Hawaii? 

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [ADMIN] Is the database being VACUUMed?

2006-02-10 Thread Chris Browne
[EMAIL PROTECTED] (C. Bensend) writes:
 Hey folks,

I'm running 8.0.4 on OpenBSD, and I'm running into issues where
 a large batch job will fire up and occasionally coincide with a
 VACUUM.  Which then makes this batch take an hour and a half,
 rather than the 30 minutes it usually takes.  :(

I am going to upgrade to 8.1.2 this weekend, before you ask.  :)

Here's the question - is there a query I can make in my scripts
 (using perl/DBD::Pg) to see if the database is being VACUUMed at
 the current time?  I could add a sleep, so the script would
 patiently wait for the VACUUM to finish before kicking off several
 intense queries.

Thanks much!

If you have command string monitoring turned on, via
stats_command_string in the postgresql.conf file, then you could get
this information from the system view pg_stat_activity.

Generally, you could look to see if a current_query is a vacuum, perhaps via...

select * from pg_stat_activity where lower(current_query) like 'vacuum%' ;

If that parameter is not turned on, then ps auxww | egrep [something
finding your PG processes] | grep VACUUM could perhaps do the trick,
albeit not from a straightforward database query...
-- 
output = (cbbrowne @ acm.org)
http://www.ntlug.org/~cbbrowne/advocacy.html
Is your pencil Y2K certified?  Do you know the possible effects if it
isn't?

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [ADMIN] autovacuum

2006-02-01 Thread Chris Browne
matthew@zeut.net (Matthew T. O'Connor) writes:
 Hope that helps.  Real world feed-back is always welcome.

While I'm at it, I should throw in an idea that I had a little while
back about a vacuum request manager.

This is kind of orthogonal to everything else that has been happening
with pg_autovacuum...

One of the troubles we have been hitting with our homebrew scripts is
when locking doesn't work out, and they start submitting multiple
vacuums at once, which sometimes snowballs into trouble.

A thought I had was to create a daemon that would serially process
requests.  It would just watch a table of requests, and when it finds
work, start work.

We'd then have some sort of injection process that would tell the
daemon "Here's new work!"

Requests would be defined thus:

/* [EMAIL PROTECTED]/dba2 vacdb=*/ \d vacuum_requests
              Table public.vacuum_requests
    Column    |           Type           |        Modifiers
--------------+--------------------------+------------------------
 vtable       | text                     | not null
 vhost        | text                     | not null
 vdatabase    | text                     | not null
 urgency      | integer                  | not null default 1
 created_on   | timestamp with time zone | not null default now()
 completed_on | timestamp with time zone | 
 failed_at    | timestamp with time zone | 
Indexes:
    vacuum_requests_pkey primary key, btree (vtable, vhost, vdatabase, created_on)
    vr_priority btree (vhost, vdatabase, urgency) WHERE ((completed_on IS NULL) AND (failed_at IS NULL))

/* [EMAIL PROTECTED]/dba2 vacdb=*/ \d vacuum_start
               Table public.vacuum_start
    Column    |           Type           |        Modifiers
--------------+--------------------------+------------------------
 vtable       | text                     | not null
 vhost        | text                     | not null
 vdatabase    | text                     | not null
 started_on   | timestamp with time zone | not null default now()
 completed_on | timestamp with time zone | 
Indexes:
    vacuum_start_pkey primary key, btree (vtable, vhost, vdatabase, started_on)

/* [EMAIL PROTECTED]/dba2 vacdb=*/ \d vacuum_failures
             Table public.vacuum_failures
   Column   |           Type           |        Modifiers
------------+--------------------------+------------------------
 vtable     | text                     | not null
 vhost      | text                     | not null
 vdatabase  | text                     | not null
 started_on | timestamp with time zone | not null
 failed_on  | timestamp with time zone | not null default now()
Indexes:
    vacuum_failures_pkey primary key, btree (vtable, vhost, vdatabase, started_on)


This has a bit more generality than would be needed for handling just
one postmaster; host/database would allow this to be used to manage
multiple backends...

We have, in our kludged-up scripts, three levels of granularity:

 1.  There are tables we vacuum every few minutes; they would be at
 urgency 1; every few minutes, we would, in effect, run the query...

 insert into vacuum_requests (vtable, vhost, vdatabase, urgency)
   select t.fqtablename, h.hostname, tld.name, 1
  from urgent_tables t, all_hosts h, all_tlds tld;

 2.  Then, there are hourly tables, at urgency level 2.

 Once an hour, we run:

 insert into vacuum_requests (vtable, vhost, vdatabase, urgency)
   select t.fqtablename, h.hostname, tld.name, 2
  from hourly_tables t, all_hosts h, all_tlds tld;

 3.  Once a day, we'd do something kind of like:

 insert into vacuum_requests (vtable, vhost, vdatabase, urgency)
   select table_schema || '.' || table_name, h.hostname, tld.name, 3 
 from information_schema.tables, all_hosts h, all_tlds tld 
 where table_type = 'BASE TABLE' and table_schema in ('public', 
'pg_catalog');

The event loop for the daemon would be to look up the highest priority
table, and add an entry to vacuum_start.
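
In terms of the tables above, that work-selection step might be as
simple as:

    SELECT vtable, vhost, vdatabase
      FROM vacuum_requests
     WHERE completed_on IS NULL AND failed_at IS NULL
     ORDER BY urgency, created_on
     LIMIT 1;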

Then it vacuums the table.

If that succeeds, the table is marked as complete in both
vacuum_start, and, FOR EVERY ENTRY CURRENTLY OUTSTANDING, in
vacuum_requests.  Thus, if a table is queued up 20 times, it will be
vacuumed once, and marked as done 20 times.

If that fails, all the relevant entries in vacuum_start and
vacuum_requests are marked with the failure information, and a record
is added to the failures table.

We're putting this off, pending the thought that, with 8.1, it's worth
testing out pg_autovacuum again.

The above is an in-the-database way of queueing up requests,
associating priorities to them, and having the queue be
administrator-visible.

We were anticipating using our present quasi-kludgy scripts to add our
favorite tables to the queue; it would seem a nice/natural thing for
there to be some automatic process (ala the pg_autovacuum daemon) that
could add things to the queue based on its knowledge of updates.

My thought is 

Re: [ADMIN] database replication

2006-01-25 Thread Chris Browne
[EMAIL PROTECTED] (Ciprian Hodorogea) writes:
 Hi All,

   Is there a stable solution for database replication with Postgres 8.1.x?

Slony-I 1.1.5 should be released later today ;-).
-- 
cbbrowne,@,ntlug.org
http://cbbrowne.com/info/slony.html
If you were plowing a field, which  would you rather use?  Two strong
oxen or 1024 chickens?  -- Seymour Cray

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [ADMIN] [Slony1-general] Blueprints for High Availability

2006-01-20 Thread Chris Browne
[EMAIL PROTECTED] (Jim C. Nasby) writes:
 dons Nomex undies
 Well, I would generally have to agree on not using Slony 1 for HA. I
 don't see how it could be considered acceptable to potentially lose
 committed transactions when the master fails. Unless maybe my
 understanding of Slony is flawed...

Well, that presumably depends on perspective.

A bank generally cannot ever afford to lose ANY transactions, which
would tend to mean that only synchronous replication would be any kind
of answer.

That kind of application points to really forcibly needing 2PC, which
doesn't tend to play well across WAN links.

Maximizing availability, which is what HA is forcibly and
unambiguously about (High Availability), is NOT exactly the same
thing as providing guarantees that committed transactions can never
be lost.

- HA, in the context of DNS services, may not have any transactional
  nature to it; you might well want to have several DNS servers kicking
  around so that if one falls over, you don't have to notice.  That does
  not really imply anything about how you update your DNS configuration.

- HA, in the context of running your corporate web server, may just 
  involve having several web servers, any of which can take over upon
  failure of other web servers.

  Updating the static bits of those web servers might well be done by
  taking them out of service, one by one, and copying the new data
  into place; again, no transactional issue there at all.

Those are both reasonable examples of applications where one might
want to use HA; neither involve transactional guarantees *at all*.

I don't think Slony-I is the *only* tool one would want to use to
improve availability; if you do have bank-like "can't lose
transactions" requirements, that might well rule it out.  Of course,
if those are the requirements, there may be a whole lot of possible
mechanisms that are ruled out.
-- 
(reverse (concatenate 'string moc.enworbbc @ enworbbc))
http://www.ntlug.org/~cbbrowne/emacs.html
Rules of the Evil Overlord #113.  I will make the main entrance to my
fortress  standard-sized. While  elaborate  60-foot high  double-doors
definitely impress  the masses, they are  hard to close  quickly in an
emergency. http://www.eviloverlord.com/

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [ADMIN] Backup - filesystem snapshots

2006-01-18 Thread Chris Browne
[EMAIL PROTECTED] (Chris Jewell) writes:
 I'm trying to implement a backup strategy for a research database in
 order to guard against users accidentally dropping their data.

 My preferred method would be to create regular snapshots of the data
 directory, and then send this to the backup server using rsync, with
 hard-linking backup rotation.  The backup data directories could then be
 examined using a postmaster running on the backup server to extract any
 accidentally deleted tables.

 My problem is how to do these snapshots: is it enough to create a hard
 link to the directory, or is there still a risk that a currently running
 transaction might introduce inconsistencies?  I guess I could use the
 pg_ctl -m 'Smart' command to stop the database after all clients have
 disconnected, but I sometimes have users leaving their clients connected
 all night.  Is there any other way to suspend the postmaster such that
 it finishes its current transaction and queues any other transactions
 while the snapshot is taking place?  Any other ideas of how I can create
 such snapshots?

Short answer to "is there a risk?": You betcha!

There's a fairly new feature called Point in Time Recovery that is
specifically designed to address those risks.

http://www.postgresql.org/docs/8.1/static/backup-online.html
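
With WAL archiving configured, a base backup no longer requires
stopping the postmaster; a minimal sketch of the 8.0+ procedure:

    SELECT pg_start_backup('nightly');
    -- ... rsync/tar the data directory over to the backup server ...
    SELECT pg_stop_backup();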
-- 
let name=cbbrowne and tld=ntlug.org in String.concat @ [name;tld];;
http://cbbrowne.com/info/advocacy.html
The English exam was a piece  of cake---which was a bit of a surprise,
actually, because I was expecting some questions on a sheet of paper.

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org


Re: [ADMIN] How to find data directory

2005-12-18 Thread Chris Browne
[EMAIL PROTECTED] (Ken Hill) writes:
 That works nice. It shows my data directory in
 '/var/lib/postgresql/7.4/main'. When I do an ls command, I get:

 $ sudo ls /var/lib/postgresql/7.4/main
 basepg_clog  pg_ident.conf  pg_xlog  postmaster.opts
 root.crt
 global  pg_hba.conf  PG_VERSION postgresql.conf  postmaster.pid

 I have created two databases named 'testdb' and 'csalgorithm'. How do I
 find these databases? I was expecting the databases to be subdirectories
 under the database directory (e.g. /var/lib/postresql/7.4/main/testdb
 and /var/lib/postgresql/7.4/main/csalgorithm).

They'll be in a subdirectory; the thing is, the files for tables and
databases are not named _by name_.

The filenames are the OID numbers associated with either databases
(see pg_catalog.pg_database) or tables (see pg_catalog.pg_class).
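
So, to match the directories under base/ to your databases:

    SELECT oid, datname FROM pg_database;
    -- testdb's files live under /var/lib/postgresql/7.4/main/base/<that oid>/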
-- 
let name=cbbrowne and tld=ntlug.org in String.concat @ [name;tld];;
http://cbbrowne.com/info/unix.html
The Linux  philosophy is laugh in  the face of  danger.  Oops.  Wrong
One.  'Do it yourself.'  That's it.  -- Linus Torvalds

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org


Re: [ADMIN] Please guide me.

2005-12-17 Thread Chris Browne
[EMAIL PROTECTED] (Praveen Kumar (TUV)) writes:
 I have installed postgresql on Redhat AS 4. Now we want to migrate data
 from oracle 8i to postgresql 8.1 and want to create snapshots,
 materialized views, dblink, taking online backup, logical backup and
 establish replication between three production servers. Plz tell me
 which documents, links and books I should follow for getting
 knowledge

For many of these things, you are using what appears to be terminology
*very* specific to Oracle.  PostgreSQL isn't Oracle, so things are not
implemented the same, and certainly don't follow Oracle's use of
terminology.

For instance, I can think of several possible meanings for
"snapshots."  And it seems obvious that no one would be such an idiot
as to ask for an "illogical" backup.

There is a section in the documentation on backups; you should avail
yourself of that, immediately, as it explains various sorts of backup
machinery that is available.

As for replication, if you need single master to multiple slave
asynchronous replication, see http://slony.info/.  The 1.1.5 release
of Slony-I, which supports PostgreSQL 8.1, should be available in the
coming week.

You will find it more useful to describe what you *want*.
-- 
let name=cbbrowne and tld=cbbrowne.com in String.concat @ [name;tld];;
http://cbbrowne.com/info/slony.html
Signs of a Klingon Programmer - 3. This  machine is GAGH! I need dual
Pentium processors if I am to do battle with this code!

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org


Re: [ADMIN] the best linux or bsd for postgresql

2005-12-09 Thread Chris Browne
[EMAIL PROTECTED] (Ing. Jhon Carrillo // Caracas, Venezuela) writes:
 hi,
 What is the best linux or bsd for postgresql 8.x?
 I'm replacing an oracle system on suse enterprise 8 with clustering, 
 dataguard, ..
 I need high and good performance for my system.
 Is there any certified unix platform for using postgresql?

If you are well familiar with some particular flavour of Linux, then
you will likely have an easier time making it perform well than
jumping to some other OS where files are in different places and the
knobs you have to tune are different from the ones you already know
about.

If you are well familiar with some particular flavour of BSD, then you
will likely have an easier time making it perform well than jumping to
some other OS where files are in different places and the knobs you
have to tune are different from the ones you already know about.

The learning curve is entirely likely to be expensive enough as to
outweigh any kinds of short term benefits you might find.
-- 
let name=cbbrowne and tld=cbbrowne.com in String.concat @ [name;tld];;
http://cbbrowne.com/info/rdbms.html
I am not a Church numeral!
I am a free variable!

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [ADMIN] restore challenge

2005-11-17 Thread Chris Browne
codeWarrior [EMAIL PROTECTED] writes:
 it doesn't work that way; this is postgreSQL -- a relational database. 
 You can't just copy the data directory like you can with mySQL or DBase, or 
 BTrieve or any of the old-school databases... if you need to backup and 
 restore data you need to look at pg_dump and pg_dumpall and the import 
 statements in psql...

Actually, that's wrong.

You certainly *can* copy the set of data directories; there is no
particular magic here.

The data is stored as a set of files in a set of directories; for
there to be some other magical something else would seem very
strange.

What you generally can't do safely is to take a copy while the
postmaster is up and running, as you risk parts of the data not being
consistent with one another.

If your OS or disk hardware supports some sort of snapshotting
technology, so that you can grab a copy of the whole thing as an
instant atomic operation, that provides a way to grab a copy while
postmaster is running.  If not, then you have to take a copy while the
postmaster is shut down.

The resulting filesystem copy will only be compatible with the same
major version of PostgreSQL, and with versions running on the same
architecture, but that can still let it be useful.
-- 
(format nil [EMAIL PROTECTED] cbbrowne ntlug.org)
http://cbbrowne.com/info/oses.html
in your opinion which is the best programming tools ?
The human brain and a keyboard. -- Nathan Wagner

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [ADMIN] Scanners connected to the database

2005-10-31 Thread Chris Browne
[EMAIL PROTECTED] (Tharo Mohono) writes:
 I have a small company and I am quite new in using
 postgres, I have established that I can install
 postgres on a database server and connect to it from
 terminals all over, and I just wanted to find out if I
 can have plugins to the server as well, like scanners,
 printers, and such.

PostgreSQL shouldn't need to care terribly much what additional
hardware you may have plugged into the server.
-- 
output = reverse(gro.mca @ enworbbc)
http://cbbrowne.com/info/spreadsheets.html
Rules of the Evil Overlord #117. No matter how much I want revenge, I
will never order an underling: ``Leave him.  He's mine!''
http://www.eviloverlord.com/

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [ADMIN] excluding a table from pg_dump

2005-10-25 Thread Chris Browne
[EMAIL PROTECTED] writes:
 [EMAIL PROTECTED] wrote on 10/25/2005 02:31:51 AM:
 It's possible with pg_restore to exclude that table on /restore/, but
 that's not what you asked for..

 On 25.10.2005 11:08, Gourish Singbal wrote:
  pg_dump does not allow you to accomplish this task for sure.
 
  On 10/20/05, Colton A Smith [EMAIL PROTECTED] wrote:
  I don't think this is possible, but I want to make sure. Let's say I
  have a database that I want to dump in its entirety, all except for one
  particular table. Is there an option flag on pg_dump that accomplishes
  this?

 Here is an option: Dump into a test database, drop the table in the
 test db, and dump to file: pg_dump originaldb | psql testdb; psql
 testdb < droptable.sql; pg_dump testdb > dumpfile

A usual reason why I would want to dump everything except for a few
tables is that those few tables are particularly enormous and
particularly useless (at least, for the purposes of my dump).

In that case, the LAST thing I want to do is to make extra copies of
the *useless* tables.

By the way, it is quite likely that the above approach would be more
quickly accomplished via using originaldb as a template for testdb.

Thus...

$ createdb --template=originaldb testdb
$ for table in t1 t2 t3 t4; do
    psql -d testdb -c "drop table public.${table}"
  done
$ pg_dump testdb > dumpfile
$ dropdb testdb
-- 
cbbrowne,@,cbbrowne.com
http://www3.sympatico.ca/cbbrowne/finances.html
Wow! You read  advocacy groups once in a  while, thinking you'll find
the occasional gem, but when you  unearth the Taj Mahal you still have
to stand back and gape a little. -- Paul Phillips [EMAIL PROTECTED]

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [ADMIN] Replication Solutions for PostgreSQL Master to Slave

2005-10-21 Thread Chris Browne
[EMAIL PROTECTED] (Joshua D. Drake) writes:
 You are looking for synchronous replication as opposed to async?
 
 Currently I have Slony working, but am not satisfied with how it
 accomplishes replication, or its interface, and am curious what
 others are using to accomplish replication?

 There is also Mammoth Replicator:

 http://www.commandprompt.com/products/mammothreplicator

 It is a commercial product (I work for CMD) but it is widely used
 in the corporate environment and you may find it a little easier
 to manage.

A vital difference is that Slony-I extracts replication information
(e.g. - determines what it needs to replicate) via triggers on the
tables, whereas Mammoth Replicator takes the (quite common in the
industry) approach of reading update information out of the
transaction logs.

There are a number of differences between the approaches, but in the
conversations with people at OSCON, it seemed surprisingly common for
the similarities to make it seem that when Slony-I was inadequate,
Mammoth Replicator would be too.

Of course, this particular discussion hasn't extracted enough
information about the dissatisfaction to evaluate much of anything...
-- 
output = (cbbrowne @ cbbrowne.com)
http://cbbrowne.com/info/spiritual.html
There was a young lady of Crewe
Whose limericks stopped at line two. 

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org


Re: [ADMIN] Solving OID wrap-around in 7.4 DB?

2005-10-17 Thread Chris Browne
[EMAIL PROTECTED] (Jeff Boes) writes:
 On behalf of a previous employer who, due to major downsizing, is left
 without an answer to this:

 What's the best way to treat impending OID wrap-around in a 7.4.8
 database? This DB has been online for about 18 months, and is expected
 to hit the dreaded wrap-around in about a month. At an application
 level, there's some small chance that OID wrap will cause failures:
 there's code that writes rows, and (for tables without primary keys)
 relies on DBD::Pg's pg_oid_status method to identify the last INSERT.
 (The vast majority of the code does not rely on this, but there's a tiny
 fraction that does, and rewriting the code to remove this is not an
 option due to the aforementioned downsizing.)

 My immediate take on this is that the only sure way to avoid the
 problem would be to dump and reload the database. Due to the size of
 the database, this is likely to take most of a day. Any other
 options?

Another method would be to use replication to load the data into a
fresh DB instance.

You could, in principle, use Slony-I to do this; that is an intended
sort of use case.
-- 
output = (cbbrowne @ acm.org)
http://www3.sympatico.ca/cbbrowne/slony.html
What if you slept?  And what if, in your sleep, you dreamed?
 And what if, in your dream, you went to heaven and there
 plucked a strange and beautiful flower?  And what if, when
 you awoke, you had the flower in your hand?  Ah, what then?
--Coleridge

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [ADMIN] backup software for postgresql

2005-10-05 Thread Chris Browne
[EMAIL PROTECTED] (Robert Ngo) writes:
 Which backup software out there works well with postgresql? I cannot
 find one with a backup agent for postgres.

You don't need one.

You can simply schedule a job that runs pg_dump every so often
(daily is common) to save data to a file.  Most competent backup
systems can backup data in files, so can pull the pg_dump output...
-- 
let name=cbbrowne and tld=ntlug.org in name ^ @ ^ tld;;
http://www3.sympatico.ca/cbbrowne/backup.html
sic transit discus mundi
-- From the System Administrator's Guide, by Lars Wirzenius

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [ADMIN] postgresql cluster on SAN

2005-09-26 Thread Chris Browne
[EMAIL PROTECTED] (Robert Ngo) writes:
 How about I have one SAN storage for the slony master node and
 another SAN to be shared by slave nodes that only do SELECT?

That seems like a reasonable sort of idea...

 Can a Dell|EMC AX100i iscsi san with sata drives serve as storage for
 a postgresql server? Will there be problems with I/O performance?

I would imagine that the SAN would provide better I/O performance than
having a couple of IDE drives; whether you'll have a problem in I/O
performance depends heavily on what kind of load the system will
experience.
-- 
let name=cbbrowne and tld=acm.org in String.concat @ [name;tld];;
http://www3.sympatico.ca/cbbrowne/rdbms.html
Linux is only free if your time has no value.  -- Jamie Zawinski

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [ADMIN] Performance question: Restore + INDEX

2005-09-25 Thread Chris Browne
[EMAIL PROTECTED] (Aldor) writes:
 I'm curious how other people do it:

 What is faster?

 1. CREATE TABLE
 2. restore data
 3. CREATE INDEX

 or

 1. CREATE TABLE
 2. CREATE INDEX
 3. restore data

Creating the index AFTER loading the data is definitely faster.  But
by all means do your own tests; it's pretty easy to do so and satisfy
yourself...
-- 
(format nil [EMAIL PROTECTED] cbbrowne ntlug.org)
http://cbbrowne.com/info/languages.html
It worked about as well as sticking a blender in the middle of a lime
plantation and hoping they'll make margaritas out of themselves.
-- Frederick J. Polsky v1.0

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [ADMIN] postgresql cluster on SAN

2005-09-21 Thread Chris Browne
[EMAIL PROTECTED] (Robert Ngo) writes:

 Robert Ngo wrote:

 Chris Browne wrote:

 [EMAIL PROTECTED] (Robert Ngo) writes:


 Can I create a cluster of postgresql servers by sharing the
 database files on a SAN? I am also looking into slony but slony
 replicates
 data to each server and my database will potentially have terabytes of
 data. I am thinking about a solution where a cluster of database servers
 will share database files on a SAN; can this be done? I am also looking
 for a load balancing solution for the postgresql database. Any
 suggestion?



 What you wish can NOT be done.

 There can only be ONE postmaster for each database cluster; you can
 NOT have multiple servers sharing a single cluster.

 so there is no way for the file storage to be shared between
 postgres clusters? Then I will need to add a huge amount of storage
 every time a new server is added.

 Is there any way i can reduce the storage requirement of the database
 server cluster?

Well, the base amount of storage for each instance is only about 26MB.

sh-3.00$ /opt/OXRS/dbs/pgsql8/bin/initdb -D .
Success. You can now start the database server using:

/opt/OXRS/dbs/pgsql8/bin/postmaster -D .
or  
/opt/OXRS/dbs/pgsql8/bin/pg_ctl -D . -l logfile start

sh-3.00$ du
176 ./global
4   ./pg_xlog/archive_status
16412   ./pg_xlog
12  ./pg_clog
12  ./pg_subtrans
4640./base/1
4704./base/17229
9348./base
4   ./pg_tblspc
25992   .
sh-3.00$ 

For any sort of interesting-sized database, this bit of overhead,
which is mostly WAL files, doesn't strike me as being terribly
material.  I can't see 26MB being considered huge in a SAN context.

You only need additional storage (beyond that) when your applications
are actually requesting to store data.  There is no magical way to
avoid that need for additional storage.
-- 
let name=cbbrowne and tld=acm.org in name ^ @ ^ tld;;
http://cbbrowne.com/info/languages.html
The newsreader abuse likely  stems from more fundamental, than merely
just the  UI, design disagreements. Requests from  Unix programmers to
replicate  Free Agent  rightfully so  should trigger  the  throwing of
sharp heavy objects at the requesting party.
-- [EMAIL PROTECTED] (jedi)

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org


Re: [ADMIN] postgresql cluster on SAN

2005-09-20 Thread Chris Browne
[EMAIL PROTECTED] (Robert Ngo) writes:
 Can I create a cluster of postgresql servers by sharing the
 database files on a SAN? I am also looking into slony but slony replicates
 data to each server and my database will potentially have terabytes of
 data. I am thinking about a solution where a cluster of database servers
 will share database files on a SAN; can this be done? I am also looking
 for a load balancing solution for the postgresql database. Any suggestion?

What you wish can NOT be done.

There can only be ONE postmaster for each database cluster; you can
NOT have multiple servers sharing a single cluster.
-- 
(reverse (concatenate 'string gro.gultn @ enworbbc))
http://cbbrowne.com/info/internet.html
The real  romance is   out   ahead and   yet to come.The computer
revolution hasn't started yet. Don't be misled by the enormous flow of
money into bad defacto standards for unsophisticated buyers using poor
adaptations of incomplete ideas. -- Alan Kay

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [ADMIN] Disabling WAL for bulk data loads

2005-09-09 Thread Chris Browne
[EMAIL PROTECTED] (Kevin Keith) writes:
 I am coming from an Oracle background - which in the case of bulk data
 loads there were several options I had where I could disable writing
 to the redo log to speed up the bulk data load (i.e. direct load, set
 the user session in no archive logging, set the affected tables to
 have no logging).

 I know the COPY command is one option - however it appears the data
 would need to be in a formatted file in order to use it correctly? I want
 to avoid writing a new file out for the COPY command and loading that.

The options get a bit better in 8.1, as the COPY command supports some
more format options.

But you'll DEFINITELY get a big boost if you can format things so COPY
can work for you.
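
A minimal example, assuming a tab-delimited file readable by the
server (the table and path here are hypothetical):

    -- 7.4 syntax: columns separated by tabs, \N for NULL
    COPY bulk_data FROM '/tmp/bulk_data.dat';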

 What other options does Postgres 7.4 provide which would allow data
 loads to bypass writing to the WAL? I don't need to have this enabled
 - because in the event of a database crash, I would simply reload the
 data from the source files as recovery.

Arguably you could turn fsync = off in postgresql.conf; there is
regrettably the increased risk that a crash would trash the ENTIRE
database, not just the recently-loaded data.

It's somewhat regrettable that you can't turn off logging just for a
single transaction's context; that would probably be the ideal for
this...
-- 
output = (cbbrowne @ cbbrowne.com)
http://cbbrowne.com/info/spreadsheets.html
If roach hotels worked on pointy haired people, Microsoft would die.
-- Pete Koren

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [ADMIN] triggers

2005-09-08 Thread Chris Browne
[EMAIL PROTECTED] (Kailash Vyas) writes:
 hi
 I have a query regarding triggers.
 On INSERT, UPDATE, DELETE I want to fire a trigger to reflect it to another 
 database on a postgres server.
 How can I do this operation?
 Thanks in advance,
 Kailash Vyas

If you wish the trigger to result in some action taking place on 
another database, then there are two major methodologies available: 
 
 1.  The trigger could RAISE an event (see the sketch after this list). 
 
 A LISTENing daemon would see that event, and do something in the 
 other database. 
 
 2.  The trigger could write data into a replicated table. 
 
 The replication system (possibly Slony-I?) would then copy 
 the replicable data to the other database. 
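
As a sketch of option 1 (all names here are hypothetical, and note
that NOTIFY carries no payload in these versions, so the daemon
re-reads a queue table to find out what changed):

create table pending_changes (
    id serial primary key,
    tablename text,
    op text
);

-- plpgsql must be installed in the database (createlang plpgsql)
create or replace function queue_change() returns trigger as '
begin
    -- record what happened, then wake up any listening daemon
    insert into pending_changes (tablename, op)
        values (TG_RELNAME, TG_OP);
    notify pending_changes;
    return null;  -- return value is ignored for AFTER triggers
end;
' language plpgsql;

create trigger mirror_changes
    after insert or update or delete on mytable
    for each row execute procedure queue_change();

The daemon simply connects, runs LISTEN pending_changes; and, on each
notification, processes and deletes the queued rows.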

-- 
cbbrowne,@,cbbrowne.com
http://cbbrowne.com/info/slony.html
Rules of the Evil Overlord #52. I will hire a team of board-certified
architects and  surveyors to  examine my castle  and inform me  of any
secret passages  and abandoned tunnels  that I might not  know about.
http://www.eviloverlord.com/

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [ADMIN] replication

2005-09-02 Thread Chris Browne
[EMAIL PROTECTED] (Alain Rodriguez Arias) writes:
 Please, can anybody tell me how to replicate a PostgreSQL DB?

What do you mean by replication?

- If what you desire is to create a copy, then you can take a backup
  and restore it somewhere else.

  Read The Fine Manual if that fits your need.
  http://www.postgresql.org/docs/current/static/backup.html

- If what you require instead is that the replica mirror the activity
  that takes place on the database in which you perform updates, then
  you should probably look into Slony-I.  URL to documentation for that
  below.
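
A minimal sketch of the backup-and-restore route (host and database
names here are hypothetical):

# dump the source database, then load it into a freshly created copy
$ pg_dump mydb > mydb.sql
$ createdb -h replica-host mydb
$ psql -h replica-host -f mydb.sql mydb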
-- 
output = reverse(moc.enworbbc @ enworbbc)
http://cbbrowne.com/info/slony.html
Rules of the Evil Overlord #67.  No matter how many shorts we have in
the system, my guards will be instructed to treat every surveillance
camera malfunction as a full-scale emergency.
http://www.eviloverlord.com/

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org


Re: [ADMIN] Poolin' it

2005-06-28 Thread Chris Browne
[EMAIL PROTECTED] (Kris Kiger) writes:
 Howdy all.  I'm doing some research on 'middleware' type connection
 pooling, such as pgpool.  I'm having some trouble finding other
 options that are actively being maintained, whether it be by the
 open source community or not.  Can anyone point me to some other
 resources or ideas for connection pooling?  Thanks a lot!

You're probably noticing the effect that when one project is suitable
for enough cases, it discourages similar projects from emerging.

Those that have been interested in connection pools have been looking
at pgpool, and finding it significantly better than any of the other
options that there were out on the information stuporhighway.

It would take quite a lot of work to create an alternative to pgpool,
and it is quite likely that it would be less work to fix the
misfeatures of pgpool.  Ergo, there is not terribly much call for
other systems...
-- 
(format nil [EMAIL PROTECTED] cbbrowne acm.org)
http://www.ntlug.org/~cbbrowne/sap.html
Rules of the Evil Overlord #78.  I will not tell my Legions of Terror
"And he must be taken alive!"  The command will be: "And try to take
him alive if it is reasonably practical."
http://www.eviloverlord.com/

---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [ADMIN] Vacuum full takes forever

2005-06-15 Thread Chris Browne
[EMAIL PROTECTED] (Pieter-Jan Savat) writes:
 I'm facing the following problem.

 I have a postgres 8.0 DB with a table 'results' containing 6.000.000
 records.
 This table has 16 indexes. Each one basically created to speed up
 different queries.

 Because of some glitch in the system there has never been a VACUUM
 FULL on this table.
 When I try to do a full vacuum (on a dual-processor, 2GB RAM, ...) it
 takes forever. I started the
 vacuum at 6pm and 15 hours later it was still going on.
 Just before starting vacuum full, I did a vacuum analyze (which took
 about 15 minutes). I also
 checked the amount of diskspace used for the indexes = 33% of 11
 available GigaBytes.
 After killing the vacuum full my diskspace for the indexes has
 increased to 41% of the 11 available GB.

 So does anyone know what I can do to fully vacuum my table? Or to at
 least decrease the amount of diskspace used?

Two choices offer themselves:

1.  Drop all indices.
Then VACUUM FULL the table.
Then recreate the indices.

2.  CLUSTER the table based on one of the indices.

Neither option is going to be pretty; either will take hours.

1. and 2. are nearly equivalent; the conspicuous difference is that
1. will give you feedback along the way as it completes one step or
another, whereas CLUSTER rewrites the table and rebuilds every index
in one long operation.
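
Roughly, the two options look like this (index and column names here
are hypothetical):

-- Option 1: drop, compact, rebuild
DROP INDEX results_idx01;              -- ...repeat for all 16 indexes
VACUUM FULL VERBOSE results;
CREATE INDEX results_idx01 ON results (some_column);

-- Option 2: rewrite the table in the order of one index (8.0 syntax)
CLUSTER results_pkey ON results;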
-- 
(format nil [EMAIL PROTECTED] cbbrowne acm.org)
http://www.ntlug.org/~cbbrowne/sap.html
Rules of the Evil Overlord #78.  I will not tell my Legions of Terror
"And he must be taken alive!"  The command will be: "And try to take
him alive if it is reasonably practical."
http://www.eviloverlord.com/

---(end of broadcast)---
TIP 9: the planner will ignore your desire to choose an index scan if your
  joining column's datatypes do not match


Re: [ADMIN] Remote Access

2005-06-14 Thread Chris Browne
Marcos [EMAIL PROTECTED] writes:
 I have a PostgreSQL server on my home computer, and now I need to
 access it from work.  The question is: how do I enable remote access
 to PostgreSQL, protected by a password?

Is there some reason why Chapter 19 in the documentation, entitled
Client Authentication, did not provide enough information?

If you haven't consulted the documentation, may I suggest that you
consider looking there first?
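
For the common case, that chapter boils down to something like the
following (the address here is hypothetical; substitute your
workplace's):

# postgresql.conf (8.0): accept TCP connections from other machines
# (on 7.x the equivalent setting is tcpip_socket = true)
listen_addresses = '*'

# pg_hba.conf: require an MD5 password from that one remote machine
host    all    all    203.0.113.5/32    md5

Remember to restart (or at least reload) the postmaster afterwards.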
-- 
(format nil [EMAIL PROTECTED] cbbrowne acm.org)
http://www.ntlug.org/~cbbrowne/sap.html
Rules of the Evil Overlord #78.  I will not tell my Legions of Terror
"And he must be taken alive!"  The command will be: "And try to take
him alive if it is reasonably practical."
http://www.eviloverlord.com/

---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


Re: [ADMIN] possible os recommendations

2005-05-25 Thread Chris Browne
[EMAIL PROTECTED] (David Bear) writes:
 Wanting to avoid religious warfare, I'm curious whether some of you
 might have recommendations for which operating system is 'best' to
 run PostgreSQL 8.x on.  I've used PostgreSQL 7.x on FreeBSD; it was
 all right.  But I'm getting sucked into the Linux world and have
 found SuSE 9.3 a pretty good distro.

 Since I've chosen a packaged distro, I realize that I depend upon
 'them' to keep me updated with critical and other bug fixes wrt
 PostgreSQL.

 That may be considered bad by some, good by others.

 I'm more concerned, though, about the complete experience, i.e.
 PostgreSQL performance on Linux vs BSD vs Windows...
 maintainability... stability... OS tweaks that were really needed to
 make pg work well, etc.

 I would really like to hear from those who have used PostgreSQL on
 multiple operating systems...

You're going to be fundamentally better off if you go with a system
that you can more readily support than one that is unfamiliar to you.

In particular, if you want to use more or less esoteric hardware
features like in-OS RAID or volume management systems, or to tweak
filesystem parameters, you're going to have better luck with a system
you're already familiar with.

The one clear quasi-religious argument to be made is that it is
highly preferable to head to some Unix-like system for production, as
opposed to Windows: there simply isn't a comparably long history of
Windows-based deployments to have nailed down OS-related issues.
-- 
(format nil [EMAIL PROTECTED] cbbrowne acm.org)
http://www.ntlug.org/~cbbrowne/sap.html
Rules of the Evil Overlord #78.  I will not tell my Legions of Terror
"And he must be taken alive!"  The command will be: "And try to take
him alive if it is reasonably practical."
http://www.eviloverlord.com/

---(end of broadcast)---
TIP 8: explain analyze is your friend


Re: [ADMIN] Sidetracking pg_autovacuum

2005-05-18 Thread Chris Browne
[EMAIL PROTECTED] (Jeff Boes) writes:
 (Environment: Pg 7.4.1 on Linux; yes, I know it's past time to
 upgrade.)

If you were running 7.2, I'd be pestering you ;-).

We didn't get rid of our last 7.1 system until last December, and I
think the backend is probably still running, albeit with nothing
hooking up to it :-).

 (Originally, I started to post this as a question about how
 pg_class.reltuples could get so far out of whack compared to the
 actual table size. After posting it, I found that (a) I had the
 mailing list address wrong [d'oh!], and (b) I think I solved my
 problem. In essence, by doing a VACUUM in a script after a sizeable
 DELETE, apparently I took the affected table out of the hands of
 pg_autovacuum so that it never, ever did anything with the
 table. Including ANALYZE. Thus, reltuples never got updated.)

Oops.  Change that to a VACUUM ANALYZE (in your script), and things
should work out better.
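
In other words, a one-line change in the nightly script (the table
name here is hypothetical):

-- previously: VACUUM big_table;
VACUUM ANALYZE big_table;  -- also refreshes reltuples and planner stats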

 I started tracking the COUNT(*) (actual row count) versus the value
 in pg_class.reltuples for a number of our larger tables. Some of the
 tables see a lot of INSERTs and DELETEs over the course of a day; as
 much as 1/12th of the data will be deleted overnight, and new data
 inserted over the course of the day. I have pg_autovacuum running,
 and I also do regular VACUUMs, ANALYZEs, and even some CLUSTERs on
 these tables.

 [N.B.: In fact, I started doing a VACUUM explicitly after the big
 nightly DELETE.]

That's not a bad idea, all in all.

 What would cause an otherwise well-behaved table to start doing
 this? Is this just a dead spot in the ANALYZE command? (By which I
 mean: ANALYZE randomly sampling rows, but my data is not terribly
 random, so it gets fooled?)

Yes, that doubtless would be it.  There's a new sampling scheme in
version 8 that may do better.

 [And here's the remaining question in my puzzled mind: ANALYZE would
 not change the reltuples value, but VACUUM FULL ANALYZE
 did. Er-wha?]

VACUUM ANALYZE would also change the reltuples value...
-- 
(format nil [EMAIL PROTECTED] cbbrowne acm.org)
http://www.ntlug.org/~cbbrowne/sap.html
Rules of the Evil Overlord #78.  I will not tell my Legions of Terror
"And he must be taken alive!"  The command will be: "And try to take
him alive if it is reasonably practical."
http://www.eviloverlord.com/

---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster


Re: [ADMIN] how do i kill user sessions?

2005-05-11 Thread Chris Browne
[EMAIL PROTECTED] (Scott Marlowe) writes:

 On Wed, 2005-05-11 at 15:08, Ing. Jhon Carrillo wrote:
 I have a problem  with the users administration. When I want to erase
 (drop) some databases there's an error: ** database name_db is being
 accessed by other users.**  I want to kill the user sessions conected
 but i don't know how to do it   (Kill the user sessions).

 First, use ps to find the pid of the offending users.  for instance:

 ps ax|grep postgres|grep test
 18925 pts/1S  0:00 postgres: postgres test [local] idle

 then, as the postgres superuser, kill the backend:

 kill 18925

You may want to be careful about which signal you send.

If you do kill -9, for instance, that may be a more severe
thwacking of the system than you intended.

Hitting it with kill -2 is usually the right answer; that's SIGINT.

Other signals may cause the database to kill ALL the backends.  
kill -3, for instance...

kill -9 will terminate the whole database system, 'with extreme
prejudice,' which will be quite likely to turn out badly :-(.
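
Continuing the example above, the safer sketch would be:

# cancel that one backend's work with SIGINT, not SIGQUIT or SIGKILL
$ kill -2 18925      # equivalently: kill -INT 18925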
-- 
(format nil [EMAIL PROTECTED] cbbrowne acm.org)
http://www.ntlug.org/~cbbrowne/sap.html
Rules of the Evil Overlord #78.  I will not tell my Legions of Terror
"And he must be taken alive!"  The command will be: "And try to take
him alive if it is reasonably practical."
http://www.eviloverlord.com/

---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster


Re: [ADMIN] [GENERAL] Storing database in WORM devices

2005-05-11 Thread Chris Browne
[EMAIL PROTECTED] (Juan Miguel Paredes) writes:
 I know it would be a hard approach but... perhaps ON DELETE and ON
 UPDATE triggers would help?

No, that's not even related to the real problem.

The problem is that the ability to have transactions is deeply
pervasive, and requires a writable store even though you don't imagine
you're doing updates to the data.

Version 8 probably moves this closer to reality with the addition of
tablespace support.  Using that, you could take finished tables, and
put them into a particular tablespace.  VACUUM FREEZE them to pretty
well eliminate the need to touch them again.  Then take that
tablespace offline, burn it onto a CD-ROM, and mount it back at the
same location.

If you do all that, and make the entire tablespace read-only, I could
imagine it sorta working.  Though I wouldn't want to bet money on the
outcome, at this point...
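
If you wanted to experiment, the sketch might look like this (paths
and table names here are hypothetical, with no promises about how the
read-only mount behaves):

-- requires 8.0 or later; run as superuser
CREATE TABLESPACE archive2004 LOCATION '/mnt/staging2004';
ALTER TABLE finished_2004 SET TABLESPACE archive2004;
VACUUM FREEZE finished_2004;
-- then burn /mnt/staging2004 to CD-ROM and remount it, read-only,
-- at the same path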
-- 
(format nil [EMAIL PROTECTED] cbbrowne acm.org)
http://www.ntlug.org/~cbbrowne/sap.html
Rules of the Evil Overlord #78.  I will not tell my Legions of Terror
"And he must be taken alive!"  The command will be: "And try to take
him alive if it is reasonably practical."
http://www.eviloverlord.com/

---(end of broadcast)---
TIP 9: the planner will ignore your desire to choose an index scan if your
  joining column's datatypes do not match


Re: [ADMIN] Database Encryption (now required by law in Italy)

2005-04-29 Thread Chris Browne
pgman@candle.pha.pa.us (Bruce Momjian) writes:

I owe you a patch on runtime.sgml for the summary I did on Use of
Encryption.

I just sent it...
-- 
(format nil [EMAIL PROTECTED] cbbrowne acm.org)
http://www.ntlug.org/~cbbrowne/sap.html
Rules of the Evil Overlord #78.  I will not tell my Legions of Terror
"And he must be taken alive!"  The command will be: "And try to take
him alive if it is reasonably practical."
http://www.eviloverlord.com/

---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [ADMIN] Another Data directory for second database - feasible?

2005-04-22 Thread Chris Browne
[EMAIL PROTECTED] (Tomeh, Husam) writes:
 I have a postgres installation with one database on a server. I'd need
 to create another database; however, I'm curious if there's a way to
 create the second database in a different directory than the existing
 one. Is that feasible and how? Or does postgres only point to one data
 directory?  Thanks in advance.

We have plenty of servers with multiple databases, handled in just
this fashion.

At the simplest level, it involves running initdb three times, on
three directories:

$ initdb -D /opt/DB1
$ initdb -D /opt/DB2
$ initdb -D /opt/DB3
$ vi /opt/DB?/postgresql.conf
[give each instance its own port, e.g. 5432, 5433, 5434]
$ for db in 1 2 3; do
    pg_ctl -D /opt/DB$db start
  done

You probably want to set up some set of init scripts that manage
telling the instances where to stow log files and such, and which can
be auto-started upon system bootup.

One of my coworkers has built a really nice init script that's good at
building the directory structure where the DBs will be stowed, hence...

$ vi init_db3.sh
[modify config]
$ ./init_db3.sh mkdir
[builds directory structure]
$ ./init_db3.sh initdb
[runs initdb for you]
$ ./init_db3.sh start

At one point, he was trying to come up with a way of releasing it to
the community; hopefully poking Drew can lead to it becoming
available, someday...
-- 
(format nil [EMAIL PROTECTED] cbbrowne acm.org)
http://www.ntlug.org/~cbbrowne/sap.html
Rules of the Evil Overlord #78.  I will not tell my Legions of Terror
"And he must be taken alive!"  The command will be: "And try to take
him alive if it is reasonably practical."
http://www.eviloverlord.com/

---(end of broadcast)---
TIP 8: explain analyze is your friend


Re: [ADMIN] vacuumdb -v output

2005-04-21 Thread Chris Browne
[EMAIL PROTECTED] (Kevin Copley) writes:
 Hi,

 I've just put a system into production in which some tables are updated 
 frequently - several times per
 second.

 I'm doing a nightly vacuumdb -v, but am not sure if it's achieving anything. 
 Here's the output for one
 table:

 INFO:  vacuuming "public.fip_track_circuit"
 INFO:  index "fip_track_circuit_pk" now contains 1557427 row versions
 in 4538 pages
 DETAIL:  10 index row versions were removed.
 0 index pages have been deleted, 0 are currently reusable.
 CPU 0.22s/0.14u sec elapsed 6.51 sec.
 INFO:  "fip_track_circuit": removed 10 row versions in 9 pages
 DETAIL:  CPU 0.00s/0.00u sec elapsed 0.00 sec.
 INFO:  "fip_track_circuit": found 10 removable, 1557427 nonremovable
 row versions in 14305 pages
 DETAIL:  1555321 dead row versions cannot be removed yet.
 There were 1 unused item pointers.
 0 pages are entirely empty.
 CPU 0.42s/0.24u sec elapsed 6.82 sec.
 INFO:  vacuuming "public.fip_xl_switch"
 I'm concerned about "1555321 dead row versions cannot be removed
 yet".

 I'd be extremely grateful if anyone could cast some light on this.

That sounds pretty bad...

It seems likely to me that you have some idle transaction hanging
around that is preventing VACUUM from doing anything useful about
cleaning out that table.

If you have command statistics turned on, you might try the query:

select * from pg_stat_activity where current_query <> '<IDLE>';

If you see some items that indicate '<IDLE> in transaction' that are
rather old, that's a nice smoking gun to indicate where the problem
lies.

Long running transactions are the bane of our existence, and are an
all too common result of buggy connection pool implementations :-(.

Go thump the offending connection, and you should see things clear out.

After that, you may need to do a VACUUM FULL to get things totally
cleared out, and probably a REINDEX after that...
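
With the table from your output, that would be roughly:

-- once the stale idle-in-transaction connection is gone:
VACUUM FULL VERBOSE fip_track_circuit;
REINDEX TABLE fip_track_circuit;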
-- 
(format nil [EMAIL PROTECTED] cbbrowne acm.org)
http://www.ntlug.org/~cbbrowne/sap.html
Rules of the Evil Overlord #78.  I will not tell my Legions of Terror
"And he must be taken alive!"  The command will be: "And try to take
him alive if it is reasonably practical."
http://www.eviloverlord.com/

---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster


Re: [ADMIN] Many persistant client connections

2005-03-30 Thread Chris Browne
[EMAIL PROTECTED] writes:
 i would like to know if there is something like the Oracle shared
 server possible with PostgreSQL. I have searched the archives but
 have not found any real discussion regarding this.

 We have two CTI applications opening 2-3 persistent connections each
 to the address server (PostgreSQL).  With 100 clients we get around
 400 persistent connections eating up the server's memory while having
 nothing to do most of the time.

Sounds like a case for looking into pgpool...

  http://pgfoundry.org/projects/pgpool/

pgpool is a connection pool system implemented in C that can allow
sharing of connections for many cases like the one you describe.
-- 
(format nil [EMAIL PROTECTED] cbbrowne cbbrowne.com)
http://www3.sympatico.ca/cbbrowne/postgresql.html
The present need for security products far exceeds the number of
individuals capable of designing secure systems.  Consequently,
industry has resorted to employing folks and purchasing solutions
from vendors that shouldn't be let near a project involving securing
a system.  -- Lucky Green

---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster


Re: [ADMIN] multiple servers updating from/to one

2005-03-30 Thread Chris Browne
[EMAIL PROTECTED] (Marco Carvalho) writes:
 Hi, all

 I need to do this scenario:

  Web Application       Web Application       Web Application
        |                     |                     |
  pgsql server 1        pgsql server 2        pgsql server 3
        |                     |                     |
        | adsl                | adsl                | adsl
        |                     |                     |
        ----------------------------------------------------
                              |
                    pgsql server (master)

 I need various servers in different locations running exactly the
 same things: one Web Application and one pgsql server each, all
 synchronizing with the master server through unstable adsl
 connections.  So if I do an insert on server 1, the data goes to the
 master and becomes available to the other servers, and vice-versa;
 and if the adsl connection breaks, the Web Application keeps working
 against the local server, and when the connection is restored both
 databases are synchronized.  Is this possible?  If so, what tools do
 I need?
 If it's true, what tools I need?

The only system I am aware of that can support this sort of usage
scenario may be PeerDirect's replication system.

The replication systems traditionally available for use with
PostgreSQL do not directly support your desire for a multimaster
approach; systems like Slony-I, Mammoth Replicator, eRServer, and such
require that there be just one master system and that replicas be
read-only.

The Slony-II system being worked on is a multimaster system, but it
requires synchronization of live servers, so it specifically does NOT
support the "operate while disconnected, then merge in changes upon
return" model that you are suggesting.

There is _possibly_ a way to get Slony-I to do what you want, if your
application is suitably malleable...

In Slony-I, it is necessary for one and only one host to be the
master or origin for each table.  But it supports the notion of
having replication sets (e.g. - sets of tables that are being
replicated) that have differing origins.

Thus, organizing those sets into their own schemas, you might set
up...

create schema master;
create table master.gl ( stuff );
create table master.ar ( stuff );
create table master.ap ( stuff );
create table master.users ( stuff );
create table master.inventory ( stuff );

create schema app1;
create table app1.gl ( stuff );
create table app1.ar ( stuff );
create table app1.ap ( stuff );
create table app1.users ( stuff );
create table app1.inventory ( stuff );

create schema app2;
create table app2.gl ( stuff );
create table app2.ar ( stuff );
create table app2.ap ( stuff );
create table app2.users ( stuff );
create table app2.inventory ( stuff );

create schema app3;
create table app3.gl ( stuff );
create table app3.ar ( stuff );
create table app3.ap ( stuff );
create table app3.users ( stuff );
create table app3.inventory ( stuff );

You'd set up the respective servers as the origins for the four
replication sets that naturally fall out of this.

Some process on the master server would be responsible for
synchronizing the sets, perhaps by adding some additional table that
feeds back what the app servers should update...  

In effect:

  "I pulled orders 123, 577, and 899 from app1, and put them into the
  master, so feel free to delete them..."

  This would be encoded in a new master table, perhaps thus:

  insert into master.downstream_changes (server, table, id)
      values ('app1', 'orders', '123');
  insert into master.downstream_changes (server, table, id)
      values ('app1', 'orders', '577');
  insert into master.downstream_changes (server, table, id)
      values ('app1', 'orders', '899');

Thus, the app1, app2, and app3 schemas would just contain transient
data that, once processed on the master, would get deleted.

Slony-I isn't particularly happy about hosts that stay disconnected a
lot of the time, so I'd hate to set this up...
-- 
(format nil [EMAIL PROTECTED] cbbrowne cbbrowne.com)
http://www3.sympatico.ca/cbbrowne/slony.html
The present need for security products far exceeds the number of
individuals capable of designing secure systems.  Consequently,
industry has resorted to employing folks and purchasing solutions
from vendors that shouldn't be let near a project involving securing
a system.  -- Lucky Green

---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster


Re: [ADMIN] Migration from 7.1.3. to 7.4.7.

2005-03-22 Thread Chris Browne
[EMAIL PROTECTED] (Scott Marlowe) writes:
 On Wed, 2005-03-16 at 06:07, Dr. Roger Wießner wrote:
 Can anyone tell me how to successfully migrate from 7.1.3. to 7.4.7.? I get 
 lots of errors...

 Try to have both installed at once (separate dirs, or separate machines)
 and dump the 7.1.3 db with the 7.4.7 pg_dump.  You might want to do a
 file system backup, and an upgrade to the last release in the 7.1.x
 series, first.

 On rare occasions, an intermediate upgrade to something in between may
 help, but I've not used 7.1 in a VERY long time.

7.1 is _mighty_ old, which gives a pretty considerable chance of there
being data type changes that may break.

I seem to recall doing an upgrade of some databases to 7.2 and
discovering that the date type used in the ancient version (which
might well have been 6.5 or 7.0, even older than 7.1!) no longer
existed in more modern versions.

We wound up splitting the conversion into two pieces:

 - A schema conversion, where the types of fields were updated to
   those available in more modern versions.  This is where most of
   the fiddling took place.

 - A data copy, where (if memory serves) we dumped the data as a set
   of INSERT statements rather than via COPY 'dumps'; a rough sketch
   follows below.

   Careful addition of BEGIN/COMMIT allowed the load to be acceptably
   fast, and using INSERT avoided differences between COPY formats...
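
A sketch of that second step, assuming both servers are running side
by side (the install path and ports here are hypothetical):

# dump data only, as INSERT statements, using the NEW pg_dump
$ /usr/local/pgsql-7.4.7/bin/pg_dump -a -d -p 5432 mydb > mydb-data.sql
# load into the 7.4.7 server once the fixed-up schema is in place
$ psql -p 5433 -f mydb-data.sql mydb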

I'd do some dry runs (where you don't expect the results to
necessarily be useful) to see what problems fall out.
-- 
let name=cbbrowne and tld=cbbrowne.com in String.concat @ [name;tld];;
http://www.ntlug.org/~cbbrowne/linuxxian.html
A VAX is virtually a computer, but not quite.

---(end of broadcast)---
TIP 8: explain analyze is your friend


Re: [ADMIN] Replication Options.

2004-10-29 Thread Chris Browne
[EMAIL PROTECTED] (Pallav Kalva) writes:
 I need to implement the following the replication scenario in
 postgres database. Here are the details of what we are looking to
 implement.

 Let's say I have 2 nodes, Node1 (US) and Node2 (Canada):

 Node1: tableA , tableB
 Node2: tableB , tableA

 tableA in Node1 is the master and it should replicate only to slave
 tableA in Node2

 tableB in Node2 is the master and it should replicate only to slave
 tableB in Node1

 Can this be possible with any of the replication models available for
 Postgres 7.4/8.0 ?

This scenario would be supported by Slony-I; there are several ways to
accomplish it, probably the easiest being thus...

- Create two nodes, 1 and 2

- Create two replication sets - #1 and #2.

  - Set 1 has origin of node 1
  - Set 2 has origin of node 2

  - Add tablea to set 1
  - Add tableb to set 2

  - Subscribe node 2 to set 1
  - Subscribe node 1 to set 2

Either node can run PG 7.4 or 8.0...
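
In slonik terms, the above might look roughly like this (cluster
definition and conninfo lines elided; a sketch, not a complete
script):

create set (id = 1, origin = 1, comment = 'tables mastered on node 1');
set add table (set id = 1, origin = 1, id = 1,
               fully qualified name = 'public.tablea');

create set (id = 2, origin = 2, comment = 'tables mastered on node 2');
set add table (set id = 2, origin = 2, id = 2,
               fully qualified name = 'public.tableb');

subscribe set (id = 1, provider = 1, receiver = 2, forward = no);
subscribe set (id = 2, provider = 2, receiver = 1, forward = no);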
-- 
let name=cbbrowne and tld=cbbrowne.com in String.concat @ [name;tld];;
http://slony.info
A VAX is virtually a computer, but not quite.

---(end of broadcast)---
TIP 7: don't forget to increase your free space map settings


Re: [ADMIN] HELP pleaseeee!!!

2004-08-25 Thread Chris Browne
[EMAIL PROTECTED] (andres barra) writes:
 Hi, can somebody help me?

 I tried to create a new database but I can't:

 psql: FATAL:  Database "template1" does not exist in the system catalog.
 createdb: database creation failed

 I tried to run initdb in another directory and it shows me a
 Segmentation Fault.

 How do I restore template1?

You could restore template1 from template0...

$ createdb -T template0 template1

I would suggest figuring out first why template1 got trashed, as
that's a pretty severe problem.  It sounds to me as though something's
severely broken with your PostgreSQL binaries; you may want to check
the provenance of the code.

If you compiled it yourself, then checking where the compile has
gotten to is clearly in your hands; if it was installed in packaged
form, you should look into whether someone has corrupted it, and
how/why...
-- 
let name=cbbrowne and tld=cbbrowne.com in String.concat @ [name;tld];;
http://cbbrowne.com/info/finances.html
The human race will decree from time to time: "There is something at
which it is absolutely forbidden to laugh."
-- Nietzsche on Common Lisp

---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster


Re: [ADMIN] Pseudo-Off-topic-survey: Opinions about future ofPostgresql(MySQL)?

2004-08-16 Thread Chris Browne
[EMAIL PROTECTED] (Gregory S. Williamson) writes:
 b) Informix is now part of IBM and is likely to be replaced by DB2
 over the next few years ... any bets on whether parts of Informix /
 Illustra / etc. migrate into some variant of open source ?

It's pretty clear that IBM bought out Informix in order to get the
customers and to try to move them over to DB2.  Open-sourcing Informix
would undermine that effort, so I wouldn't expect to see it happen any
time soon.
-- 
(reverse (concatenate 'string moc.enworbbc @ enworbbc))
http://www3.sympatico.ca/cbbrowne/x.html
Don't panic.
-- The Hitchhiker's Guide to the Galaxy

---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster


Re: [ADMIN] Pseudo-Off-topic-survey: Opinions about future of Postgresql(MySQL)?

2004-08-16 Thread Chris Browne
   [EMAIL PROTECTED] (Enrique Arizn) writes:
 But in the last ten years, I have never once heard mention of
 Ingres in a commercial context.  I was aware of it via University
 Ingres and because of knowing a little history, both of which came
 from academia, not from the commercial world.
 
 Consider:
 - Monster.com shows 13 jobs mentioning Ingres;
 - PostgreSQL gets you 55 hits.

  Curious; my first post was in part motivated because I also use job
 search engines to gauge the success of a product, and I found Ingres
 was much more used in commercial deployments than PostgreSQL.  On
 Jobserve.com:

  - PostgreSQL related jobs:  5 vacancies
  - Ingres related jobs: 55 vacancies
  - SAPDB/MaxDB related jobs: 0 vacancies

  Jobserve.com concentrates on European countries, mainly around the
 London financial world, so it looks like Ingres is much more widely
 used in Europe, while the opposite is true of PostgreSQL in the USA.

 Back to the Ingres question, it is _possible_ that the Ingres code
 base may be usable / maintainable / improvable.  It is by no means
 guaranteed that this is so.

  I think you are completely wrong on this point.

Hmm?  What could conceivably have been wrong about what I wrote?

I didn't say that the code base was unmaintainable; I intentionally
waffled about the matter, so I _couldn't_ be wrong.

 - It's _possible_ that the Ingres code may prove to be fairly easy to
   maintain and enhance;

 - It's also possible that after a dozen years of past optimizations
   and people hacking on it, it is almost impossible to do so.

The latter was the case for Adabas-D when it got open-sourced, so
there certainly is precedent.  And the tough-learning-curve property
has held for numerous software packages that have been released in
open source form.

I think it took about a year before people were able to do builds of
Mozilla, and even then, it was _seriously_ feature deficient because
open sourcing it required stripping a lot of stuff out, and there
was a hefty learning curve.

I would find it surprising for a mature software product like Ingres
to NOT be a challenge to would-be newcomers.

 One of the great things about Ingres, with respect to its near/far
 future, is that it is a core element in more than 100 CA
 applications, where it comes bundled.  So it makes lots of sense for
 CA not to drop it, and to continue to improve it, so they don't stay
 dependent on an Oracle $48,000/CPU license that obviously more than
 doubles the final cost of many CA products.  CA has nearly doubled
 the number of Ingres developers since it first planned to open-source
 it (that's at least what CA proclaims), and they are working to port
 many of their products, right now tied to Oracle databases, over to
 Ingres.  That will mean dramatically reduced costs for CA, and
 instant growth of its client base.

If it was really such a great product, then why didn't they start
porting their Oracle-based products to use Ingres a year ago, when
they could have gotten the benefit of charging hefty licensing fees
for Ingres as well?

And the "dramatically reduced costs" and "instant growth of the client
base" are both illusions.

 1.  CA doesn't save money by porting their applications to run on
 Ingres; it _costs_ them money to do so.

 2.  CA doesn't instantly grow its client base, unless there is some
  magical reason to imagine that new customers will suddenly want
  to start buying products from CA because these products have
  been ported to run on Ingres.

  When I go to the Ingres website it gives me the impression of a
 project that is really alive, and of course I downloaded the Ingres
 documentation and found it better documented and more up to date than
 PostgreSQL's.  A thing I really liked is that they constantly compare
 Ingres to Oracle and DB2 in the docs, emphasizing the points where
 Ingres is not yet as mature as its rivals (XML support, for example).
 This is not the typical behavior of a company that throws a product
 over the wall as open source just because it no longer makes a
 profit.

CA are pretty good at marketing, so I haven't the slightest bit of
trouble believing that they would be able to successfully give this
impression.

SAP AG did very similar things with SAP-DB, and that did not prevent
reality from being quite different from impressions.
-- 
cbbrowne,@,ntlug.org
http://cbbrowne.com/info/x.html
I'm sorry,  Mr.   Kipling, but you  just  don't know how to   use the
English Language.  -- Editor of the San Francisco Examiner, informing
Rudyard Kipling, who had one  article published in the newspaper, that
he needn't bother submitting a second, 1889

---(end of broadcast)---
TIP 9: the planner will ignore your desire to choose an index scan if your
  joining column's datatypes do not match


Re: [ADMIN] pg_hba.conf

2004-05-26 Thread Chris Browne
[EMAIL PROTECTED] (mike g) writes:
 I believe I found my problem.  The Cisco VPN client I use encrypts
 data at a 168-bit level.  Postgres only supports up to 128-bit,
 correct?

That ought to be entirely irrelevant: your VPN client encrypts all
data going across the network, regardless of what the database does.
Indeed, if the VPN is already encrypting the data, it is redundant to
have the database server encrypt it an extra time.

That will just make things perform poorly.

And if you have some network configuration problem, adding in extra
layers of encryption is unlikely to make it easier to solve the
problem...
-- 
(reverse (concatenate 'string gro.mca @ enworbbc))
http://cbbrowne.com/info/linux.html
Howe's Law:
Everyone has a scheme that will not work.

---(end of broadcast)---
TIP 8: explain analyze is your friend


Re: [ADMIN] Raw devices vs. Filesystems

2004-04-06 Thread Chris Browne
[EMAIL PROTECTED] (Gregory S. Williamson) writes:
 No point to beating a dead horse (other than the sheer joy of the
 thing) since postgres does not have raw device support, but ...  raw
 devices, at least on solaris, are about 10 times as fast as cooked
 file systems for Informix. This might still be a gain for postgres'
 performance, but the portability issues remain.

That claim seems really rather remarkable.

It implies an entirely stunning degree of inefficiency in the
implementation of filesystems on Solaris.

The amount of indirection involved in walking through i-nodes and such
is something I would expect to introduce some percentage of
performance loss, but for it to introduce overhead of over 900%
presumably implies that Sun (and/or Veritas) got something really
horribly wrong.
-- 
select 'cbbrowne' || '@' || 'cbbrowne.com';
http://www.ntlug.org/~cbbrowne/nonrdbms.html
Rules of the Evil Overlord #1.  My Legions of Terror will have
helmets with clear plexiglass visors, not face-concealing ones.
http://www.eviloverlord.com/

---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster