Re: [GENERAL] backup-strategies for large databases

2011-08-14 Thread Mikko Partio
On Sun, Aug 14, 2011 at 12:44 AM, MirrorX  wrote:

> The issue here is that the server is heavily loaded. The daily traffic is
> heavy, which means the db size is increasing every day (by 30 GB on
> average), and the size is already pretty large (~2 TB).
>
> At the moment, the copy of the PGDATA folder (excluding the pg_xlog folder),
> its compression and its storage on a local storage disk take about
> 60 hours, and the resulting file is about 550 GB. The archives are kept in a
> different location, so that's not a problem. So I don't even want to imagine
> how much time the uncompress and copy will take in a 'disaster' scenario.
>
> Plus, we cannot keep the PGDATA at an older version and just replicate the
> WALs, because due to the heavy load they amount to about 150 GB/day. So, even
> if we suppose we have unlimited disk storage, it's not reasonable
> to use 5 TB for the WALs (if the PGDATA is sent once a month), and
> furthermore a lot of time will be needed for the 2nd server to recover,
> since it will have to process this huge amount of WALs.
>

We have a pretty similar situation: database size is ~3 TB with daily xlog
generation of about 25 GB. We do a full backup (tar PGDATA + xlogs) every
fortnight and back up just the xlogs in between. The full backup takes almost
48 hours and is about 500 GB in size. All backups are gzipped, of course.

The backup duration is not a problem, but the restore _might_ be. We have
restored this database more than once, and each time it was fully restored
surprisingly quickly (a matter of hours). Of course, if you have a 24/7
database this might not be acceptable, but then again, if that's the case you
should have a standby ready anyway.
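
In case it's useful, the full backup itself is just the standard low-level
base backup sequence. A minimal sketch, assuming WAL archiving
(archive_command) is already set up -- the label is only an example:

  SELECT pg_start_backup('fortnightly_base');  -- mark the backup start in WAL
  -- tar + gzip $PGDATA (excluding pg_xlog) at the OS level here
  SELECT pg_stop_backup();                     -- ends the backup; recent versions
                                               -- also wait for the last WAL
                                               -- segment to be archived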

Regards

Mikko


Re: [GENERAL] Incrementally Updated Backups

2010-09-13 Thread Mikko Partio
>
> >> That section has been removed from the current 9.0 docs because we are
> >> unsure it works.
> >
> > Is the feature (or the documentation) still being worked on, or is
> pg_dump
> > the only way to take a backup of a warm standby while the database is
> > running?
>
> I don't think you can take a pg_dump of a warm standby without making it
> recover.  But I can't see why you can't use a snapshot to recover a
> warm standby, since the file system will be just a base snapshot and a
> bunch of WAL files.


Sorry, I got confused with the terms. What I meant was 'hot standby', the
new feature implemented in 9.0. I guess you can take a pg_dump out of a hot
standby, right?

Regards

Mikko


Re: [GENERAL] Incrementally Updated Backups

2010-09-13 Thread Mikko Partio
>
> > I'm interested in the "incrementally updated backups" scenario
> > described in section 25.6 of the Postgres 9 documentation. I've
> > configured streaming replication for my warm standby server.
> >
> > Step 2 in this procedure is to note pg_last_xlog_replay_location at
> > the end of the backup. However it seems like this requires hot standby
> > to be configured; otherwise there is no way of connecting to the
> > standby machine to make the required query. That does not seem clear
> > from the documentation. Is there a way to get this without using hot
> > standby?
>
> That section has been removed from the current 9.0 docs because we are
> unsure it works.



Is the feature (or the documentation) still being worked on, or is pg_dump
the only way to take a backup of a warm standby while the database is
running?

Regards

Mikko


Re: [GENERAL] Postgres Clustering Options

2009-11-12 Thread Mikko Partio
On Fri, Nov 13, 2009 at 1:47 AM, David Kerr  wrote:

> In your environment, are the applications able to recover automatically
> after
> a DB failover?
>
> For example, we're using Java/JDBC connections + Geronimo; we're researching
> whether
> or not JDBC/Geronimo would be able to retry in the case of losing a
> connection to
> the DB vs failing and crashing the app.
>


Since the system is active/passive, a failover *will* disconnect all
sessions. We have coded our applications so that if they lose connection to
the database, they will automatically try to reconnect.

Regards

Mikko


Re: [GENERAL] Postgres Clustering Options

2009-11-11 Thread Mikko Partio
On Wed, Nov 11, 2009 at 7:28 PM, David Kerr  wrote:

> What I plan on doing is:
>
> Postgres installed on a cluster configured in active/passive (both pointing
> to the same SAN).
> (If PG or the OS fails we trigger a failover to the passive node.)
>
> Is this a common/recommended method of handling clustering with Postgres?
> Google searches
> basically point to using a replication based solution, which I don't think
> would meet my
> performance demands.
>
> Does anyone have experience with this or a similar setup that they could
> share with me?
>


We have done a setup like this with Red Hat Cluster Suite.

We are quite happy with the setup in general, and it has been working well
even in 'unexpected circumstances' (power outages etc). The only thing I'd
change in this setup if I could is the cluster software: RHCS is not mature
enough, it seems every release contains new critical bugs, and sometimes
even mission-critical components such as the quorum disk do not work after an
upgrade.

Regards

Mikko


Re: [GENERAL] connecting using libpq breaks printf

2009-02-18 Thread Mikko Partio
On Wed, Feb 18, 2009 at 4:47 PM, Joey Morris  wrote:

> This is my first attempt at using libpq, and I'm running across a strange
> problem. Here is my bare-bones program:
>
> #include <stdio.h>
> #include "libpq-fe.h"
>
> int main(int argc, char **argv) {
>  PGconn *conn;
>  fprintf(stderr, "connecting\n");
>  conn = PQconnectdb("dbname=postgres");
>  PQfinish(conn);
>  return 0;
> }
>


Works fine with Linux + gcc. Must be something in your environment.

Regards

Mikko


Re: [GENERAL] two or more pg installations running as the same user

2008-11-25 Thread Mikko Partio
On Tue, Nov 25, 2008 at 12:37 PM, Grzegorz Jaśkiewicz <[EMAIL PROTECTED]>wrote:

> hey folks,
>
> I am thinking about a testing environment here, and have to set up temporary
> installations of Postgres for that. More than one, because the software
> operates on a few different connections, to different databases.
> I know I can do it under the same user, just by changing PGDATA/PGPORT before
> initdb gets run. But will it be safe otherwise?
>

You know you don't need separate clusters (i.e. installations) to have
multiple databases?
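
For example, something like this inside a single cluster is usually all you
need (database names here are made up):

  -- one cluster (one PGDATA, one port), several independent databases
  CREATE DATABASE test_app1;
  CREATE DATABASE test_app2;
  -- then connect with e.g.: psql -p 5432 -d test_app1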

Regards

Mikko


Re: [GENERAL] "too many trigger records found for relation xyz"

2008-04-08 Thread Mikko Partio
On Tue, Apr 8, 2008 at 9:30 PM, Tom Lane <[EMAIL PROTECTED]> wrote:

> "Mikko Partio" <[EMAIL PROTECTED]> writes:
> > On Tue, Apr 8, 2008 at 6:38 PM, Zdenek Kotala <[EMAIL PROTECTED]>
> wrote:
> >> It is really strange. It would be nice to have reproduce scenario. Can
> you
> >> run same DDL command which invoke a catalog corruption on test database
> >> cluster?
>
> > I tried it on a fresh instance but it went through with no errors  :(
>
> Is it possible that anything was doing a VACUUM FULL on pg_class
> concurrently with your schema changes on these tables?  The behavior
> looks suspiciously like some problems we've seen before wherein rows
> ended up with the wrong commit-bit markings...


I may have had a VACUUM FULL on some of my own tables, but definitely not on
any system tables. There was also a bunch (maybe 10) of autovacuum processes
working at that time.

Regards

Mikko


Re: [GENERAL] "too many trigger records found for relation xyz"

2008-04-08 Thread Mikko Partio
On Tue, Apr 8, 2008 at 6:38 PM, Zdenek Kotala <[EMAIL PROTECTED]> wrote:

> Mikko Partio napsal(a):
>
>  On Tue, Apr 8, 2008 at 5:26 PM, Mikko Partio <[EMAIL PROTECTED]> wrote:
> >
> >  Ok I removed the faulty tuple and nothing catastrophical happened. I
> > > can
> > > do a pg_dump now, but I still can't remove the one remaining table:
> > >
> > > # drop table xyz ;
> > > ERROR:  too many trigger records found for relation "xyz"
> > >
> > > Any insight for this problem?
> > >
> > >
> > >
> > I managed to solve the situation with the help of this thread:
> > http://archives.postgresql.org/pgsql-bugs/2007-02/msg00167.php
> >
> > Everything seems to be ok for now, but I'm still curious what caused to
> > whole mess to begin with.
> >
>
> It is really strange. It would be nice to have reproduce scenario. Can you
> run same DDL command which invoke a catalog corruption on test database
> cluster?


I tried it on a fresh instance but it went through with no errors  :(

Regards

Mikko


Re: [GENERAL] "too many trigger records found for relation xyz"

2008-04-08 Thread Mikko Partio
On Tue, Apr 8, 2008 at 5:26 PM, Mikko Partio <[EMAIL PROTECTED]> wrote:

>
> Ok I removed the faulty tuple and nothing catastrophical happened. I can
> do a pg_dump now, but I still can't remove the one remaining table:
>
> # drop table xyz ;
> ERROR:  too many trigger records found for relation "xyz"
>
> Any insight for this problem?
>
>

I managed to solve the situation with the help of this thread:
http://archives.postgresql.org/pgsql-bugs/2007-02/msg00167.php

Everything seems to be ok for now, but I'm still curious what caused the
whole mess to begin with.
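
For the archives: as far as I understand it, the fix amounts to making
pg_class.reltriggers agree with the rows actually present in pg_trigger. A
rough sketch of that kind of catalog surgery (my reconstruction, not a
verbatim copy from that thread -- verify the counts yourself and do it in a
transaction, as superuser):

  BEGIN;
  -- how many trigger rows really exist for the table?
  SELECT count(*) FROM pg_trigger WHERE tgrelid = 'xyz'::regclass;
  -- make the pg_class counter match that number
  UPDATE pg_class
     SET reltriggers = (SELECT count(*) FROM pg_trigger
                        WHERE tgrelid = 'xyz'::regclass)
   WHERE oid = 'xyz'::regclass;
  COMMIT;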

Thanks a lot for your help again.

Regards

Mikko


Re: [GENERAL] "too many trigger records found for relation xyz"

2008-04-08 Thread Mikko Partio
On Tue, Apr 8, 2008 at 1:45 PM, Zdenek Kotala <[EMAIL PROTECTED]> wrote:

> Mikko Partio napsal(a):
>
>  Should I try remove one of the duplicate rows from pg_class?
> >
>
> Try it with caution. You should use ctid column to refer to exact row.
>
>

Ok, I removed the faulty tuple and nothing catastrophic happened. I can do
a pg_dump now, but I still can't remove the one remaining table:

# drop table xyz ;
ERROR:  too many trigger records found for relation "xyz"

Any insight for this problem?

Regards

Mikko


Re: [GENERAL] "too many trigger records found for relation xyz"

2008-04-08 Thread Mikko Partio
On Tue, Apr 8, 2008 at 2:49 PM, Zdenek Kotala <[EMAIL PROTECTED]> wrote:

> Mikko Partio napsal(a):
>
>  On Tue, Apr 8, 2008 at 1:45 PM, Zdenek Kotala <[EMAIL PROTECTED]>
> > wrote:
> >
> >  Mikko Partio napsal(a):
> > >
> > >  Should I try remove one of the duplicate rows from pg_class?
> > > Try it with caution. You should use ctid column to refer to exact row.
> > >
> > > Try before:
> > >
> > > select oid, ctid, xmin, xmax, cmin, cmax, relname from pg_class where
> > > oid
> > > = 294397;
> > >
> > > If both row are identical or not.
> > >
> > >
> >
> > # select oid, ctid, xmin, xmax, cmin, cmax, relname from pg_class
> >   where oid = 294397;
> >   oid   |   ctid    |    xmin    |    xmax    | cmin | cmax | relname
> > --------+-----------+------------+------------+------+------+---------
> >  294397 | (1125,36) | 3944654557 |          0 |   35 |   35 | abc
> >  294397 | (1124,55) | 3943984380 | 3943984642 |    3 |    3 | abc
> > (2 rows)
> >
> > Which one should I remove? What is the worst-case scenario for the delete?
> > The database is 1.5 TB so I would rather not restore it from backups :-)
> >
>
> (1124,55) is the deleted one. This row should not be visible, so it seems to
> be a visibility problem -- transaction wraparound. Do you have any warning
> messages in the log about vacuuming? Or clog corruption, if transaction
> 3943984642 is marked as rolled back.
>


There are no such messages in the logfile, but autovacuum is working hard
to prevent xid wraparound (we have an import process which consumes huge
amounts of xids).

# select max(age(relfrozenxid)) from pg_class where relkind = 'r' ;
    max
-----------
 204500712
(1 row)

How can I tell if there is clog corruption?

Thanks a lot for your help.

Regards

Mikko


Re: [GENERAL] "too many trigger records found for relation xyz"

2008-04-08 Thread Mikko Partio
On Tue, Apr 8, 2008 at 1:45 PM, Zdenek Kotala <[EMAIL PROTECTED]> wrote:

> Mikko Partio napsal(a):
>
>  Should I try remove one of the duplicate rows from pg_class?
> >
>
> Try it with caution. You should use ctid column to refer to exact row.
>
> Try before:
>
> select oid, ctid, xmin, xmax, cmin, cmax, relname from pg_class where oid
> = 294397;
>
> If both row are identical or not.
>


# select oid, ctid, xmin, xmax, cmin, cmax, relname from pg_class
  where oid = 294397;
  oid   |   ctid    |    xmin    |    xmax    | cmin | cmax | relname
--------+-----------+------------+------------+------+------+---------
 294397 | (1125,36) | 3944654557 |          0 |   35 |   35 | abc
 294397 | (1124,55) | 3943984380 | 3943984642 |    3 |    3 | abc
(2 rows)

Which one should I remove? What is the worst-case scenario for the delete?
The database is 1.5 TB so I would rather not restore it from backups :-)

Regards

Mikko


Re: [GENERAL] "too many trigger records found for relation xyz"

2008-04-08 Thread Mikko Partio
On Tue, Apr 8, 2008 at 1:19 PM, Zdenek Kotala <[EMAIL PROTECTED]> wrote:

> Mikko Partio napsal(a):
>
> >
> >
> > I little investigation showed that there is duplicate row in pg_class:
> >
> > # select oid from pg_class group by oid having count(*) > 1 ;
> >  oid
> > --------
> >  294397
> > (1 row)
> >
>
> Could you check if pg_attribute is doubled for this relation?


It looks like pg_attribute has only single entries (no duplicate rows):

# select * from pg_attribute where attrelid = 294397 ;
 attrelid | attname  | atttypid | attstattarget | attlen | attnum | attndims
| attcacheoff | atttypmod | attbyval | attstorage | attalign | attnotnull |
atthasdef | attisdropped | attislocal | attinhcount
--+--+--+---+++--+-+---+--++--++---+--++-
   294397 | tableoid |   26 | 0 |  4 | -7 |0
|  -1 |-1 | t| p  | i| t  |
f | f| t  |   0
   294397 | cmax |   29 | 0 |  4 | -6 |0
|  -1 |-1 | t| p  | i| t  |
f | f| t  |   0
   294397 | xmax |   28 | 0 |  4 | -5 |0
|  -1 |-1 | t| p  | i| t  |
f | f| t  |   0
   294397 | cmin |   29 | 0 |  4 | -4 |0
|  -1 |-1 | t| p  | i| t  |
f | f| t  |   0
   294397 | xmin |   28 | 0 |  4 | -3 |0
|  -1 |-1 | t| p  | i| t  |
f | f| t  |   0
   294397 | ctid |   27 | 0 |  6 | -1 |0
|  -1 |-1 | f| p  | s| t  |
f | f| t  |   0
   294397 | id   |   21 |-1 |  2 |  1 |0
|  -1 |-1 | t| p  | s| t  |
f | f| t  |   0
   294397 | name | 1043 |-1 | -1 |  2 |0
|  -1 |16 | f| x  | i| t  |
f | f| t  |   0
   294397 | desc_fi  |   25 |-1 | -1 |  3 |0
|  -1 |-1 | f| x  | i| t  |
f | f| t  |   0
   294397 | desc_en  |   25 |-1 | -1 |  4 |0
|  -1 |-1 | f| x  | i| t  |
f | f| t  |   0
(10 rows)

Should I try remove one of the duplicate rows from pg_class?

Regards

Mikko


Re: [GENERAL] "too many trigger records found for relation xyz"

2008-04-08 Thread Mikko Partio
On Tue, Apr 8, 2008 at 12:20 PM, Mikko Partio <[EMAIL PROTECTED]> wrote:

>
>
> On Tue, Apr 8, 2008 at 11:27 AM, Zdenek Kotala <[EMAIL PROTECTED]>
> wrote:
>
> >
> >
> > Mikko Partio napsal(a):
> >
> > > Hello all
> > >
> > > Postgres version 8.3.1
> > >
> > > I just created a bunch of tables (~10) with identical structure; all
> > > tables
> > > have 6 foreign key references to other tables and a primary key. To my
> > > surprise, some of the tables were created ok, some missed primary key
> > > and
> > > some didn't get created at all.
> > >
> >
> > Can you provide self contained reproduction scenario?
> >
>
>
> Unfortunately not -- if I try to dump one the referenced tables I get this
> error:
>
> pg_dump: SQL command failed
> pg_dump: Error message from server: ERROR:  more than one row returned by
> a subquery used as an expression
> pg_dump: The command was: SELECT tableoid, oid, typname, typnamespace,
> (SELECT rolname FROM pg_catalog.pg_roles WHERE oid = typowner) as rolname,
> typinput::oid as typinput, typoutput::oid as typoutput, typelem, typrelid,
> CASE WHEN typrelid = 0 THEN ' '::"char" ELSE (SELECT relkind FROM pg_class
> WHERE oid = typrelid) END as typrelkind, typtype, typisdefined, typname[0] =
> '_' AND typelem != 0 AND (SELECT typarray FROM pg_type te WHERE oid =
> pg_type.typelem) = oid AS isarray FROM pg_type
>


A little investigation showed that there is a duplicate row in pg_class:

# select oid from pg_class group by oid having count(*) > 1 ;
  oid
--------
 294397
(1 row)

# \x
Expanded display is on.

# select * from pg_class where oid = 294397;
-[ RECORD 1 ]--+-
relname| abc
relnamespace   | 2200
reltype| 294399
relowner   | 16385
relam  | 0
relfilenode| 294397
reltablespace  | 0
relpages   | 1
reltuples  | 3
reltoastrelid  | 294400
reltoastidxid  | 0
relhasindex| t
relisshared| f
relkind| r
relnatts   | 4
relchecks  | 0
reltriggers| 6
relukeys   | 0
relfkeys   | 0
relrefs| 0
relhasoids | f
relhaspkey | t
relhasrules| f
relhassubclass | f
relfrozenxid   | 3840767676
relacl | {removed}
reloptions |

-[ RECORD 2 ]--+-
relname| abc
relnamespace   | 2200
reltype| 294399
relowner   | 16385
relam  | 0
relfilenode| 294397
reltablespace  | 0
relpages   | 1
reltuples  | 3
reltoastrelid  | 294400
reltoastidxid  | 0
relhasindex| t
relisshared| f
relkind| r
relnatts   | 4
relchecks  | 0
reltriggers| 12
relukeys   | 0
relfkeys   | 0
relrefs| 0
relhasoids | f
relhaspkey | t
relhasrules| f
relhassubclass | f
relfrozenxid   | 3840767676
relacl | {removed}
reloptions |

Regards

Mikko


Re: [GENERAL] "too many trigger records found for relation xyz"

2008-04-08 Thread Mikko Partio
On Tue, Apr 8, 2008 at 11:27 AM, Zdenek Kotala <[EMAIL PROTECTED]>
wrote:

>
>
> Mikko Partio napsal(a):
>
> > Hello all
> >
> > Postgres version 8.3.1
> >
> > I just created a bunch of tables (~10) with identical structure; all
> > tables
> > have 6 foreign key references to other tables and a primary key. To my
> > surprise, some of the tables were created ok, some missed primary key
> > and
> > some didn't get created at all.
> >
>
> Can you provide self contained reproduction scenario?
>


Unfortunately not -- if I try to dump one of the referenced tables I get this
error:

pg_dump: SQL command failed
pg_dump: Error message from server: ERROR:  more than one row returned by a
subquery used as an expression
pg_dump: The command was: SELECT tableoid, oid, typname, typnamespace,
(SELECT rolname FROM pg_catalog.pg_roles WHERE oid = typowner) as rolname,
typinput::oid as typinput, typoutput::oid as typoutput, typelem, typrelid,
CASE WHEN typrelid = 0 THEN ' '::"char" ELSE (SELECT relkind FROM pg_class
WHERE oid = typrelid) END as typrelkind, typtype, typisdefined, typname[0] =
'_' AND typelem != 0 AND (SELECT typarray FROM pg_type te WHERE oid =
pg_type.typelem) = oid AS isarray FROM pg_type

Regards

Mikko


[GENERAL] "too many trigger records found for relation xyz"

2008-04-08 Thread Mikko Partio
Hello all

Postgres version 8.3.1

I just created a bunch of tables (~10) with identical structure; all tables
have 6 foreign key references to other tables and a primary key. To my
surprise, some of the tables were created ok, some were missing the primary
key, and some didn't get created at all.

Postgres complained:

2008-04-08 08:50:41 EEST [721]: [27-1] ERROR:  relation 543058 deleted while
still in use

and

2008-04-08 08:50:41 EEST [721]: [36-1] ERROR:  could not find tuple for
relation 54309

and

2008-04-08 08:50:42 EEST [721]: [114-1] ERROR:  duplicate key value violates
unique constraint "pg_class_oid_index"

When I tried to remove the tables, the same errors kept coming up:

2008-04-08 08:53:15 EEST [27794]: [5-1] ERROR:  duplicate key value violates
unique constraint "pg_class_oid_index"
2008-04-08 08:53:15 EEST [27794]: [6-1] STATEMENT:  drop table xyz ;

I searched Google for a while, then tried again, and this time I was able
to drop all but one of the tables. When I try to drop the one remaining
table, I get this error message:

2008-04-08 10:09:31 EEST [27794]: [95-1] ERROR:  too many trigger records
found for relation "xyz"

The database had a bunch of autovacuum runs and an import process copying
data to temporary tables at the time I tried to create the tables. What
could cause such behaviour, and how can I remove the one remaining table?

Regards

Mikko


Re: [GENERAL] SPI-functions and transaction control

2008-02-20 Thread Mikko Partio
On Feb 19, 2008 12:12 AM, Alvaro Herrera <[EMAIL PROTECTED]> wrote:

> Mikko Partio wrote:
>
> > Now, I was wondering if a c function would be faster, and with the help
> of
> > the manual I have written a function that can insert tuples from one
> table
> > to another. As the manual states (
> > http://www.postgresql.org/docs/8.3/interactive/spi.html), there is no
> way to
> > catch the constraint violation error with SPI though. The manual still
> > mentions that there is an undocumented way of doing this, has anybody
> ever
> > done this? I was looking the 'exception' code at
> > src/pl/plpgsql/src/pl_exec.c but it's quite frightening since I really
> don't
> > have that much experience in c :)
>
> I think you should try DefineSavepoint, RollbackToSavepoint and
> ReleaseSavepoint.  Take a close look at plpgsql's callers for those
> functions, because I don't think it's all that straightforward.  OTOH
> you also need some PG_TRY blocks.  There are many fine details here,
> perhaps too many if you're not battered enough in C.
>
> I think I would suggest trying to do it purely with SQL, temp tables,
> etc.  If you can afford a table lock, it could be a lot easier and
> faster than setting a savepoint per tuple.



The import is done with a temp table, the bottleneck being the copying of
rows from the temp table to the actual table.

I managed to solve the issue with PG_TRY blocks (idea copied from
http://archives.postgresql.org/pgsql-hackers/2006-02/msg00836.php).  As you
said, I'm not battered enough in C, so I guess not all the corner cases are
handled, but for now it seems to work ok.
The C version of the function is in fact 5 times faster than the original
plpgsql version.

Regards

Mikko


[GENERAL] SPI-functions and transaction control

2008-02-17 Thread Mikko Partio
Hello list

I am trying to write a function in C that would 'merge' two tables together.


The idea is that we insert rows from one table into another, and if there is a
constraint violation, update the old row with the new row. I have done this
successfully with plpgsql, but alas, the tables are so big that it takes
quite some time for the function to finish.
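
For reference, the plpgsql version is essentially a per-row insert with an
exception handler, roughly like this simplified sketch (table and column
names are made up):

  CREATE OR REPLACE FUNCTION merge_row(p_id integer, p_val text) RETURNS void AS $$
  BEGIN
      -- try the insert first
      INSERT INTO target_table (id, val) VALUES (p_id, p_val);
  EXCEPTION WHEN unique_violation THEN
      -- the row already exists: update it instead
      UPDATE target_table SET val = p_val WHERE id = p_id;
  END;
  $$ LANGUAGE plpgsql;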

Now, I was wondering if a C function would be faster, and with the help of
the manual I have written a function that can insert tuples from one table
into another. As the manual states (
http://www.postgresql.org/docs/8.3/interactive/spi.html), there is no way to
catch the constraint violation error with SPI though. The manual still
mentions that there is an undocumented way of doing this; has anybody ever
done this? I was looking at the 'exception' code in
src/pl/plpgsql/src/pl_exec.c but it's quite frightening since I really don't
have that much experience in C :)

Regards

Mikko


[GENERAL] rpm's for 8.3 beta 4

2007-12-03 Thread Mikko Partio
Hello

Is there going to be an RPM release of beta 4 (for RHEL 5)? It seems that
beta 2 did have Red Hat RPMs but beta 3 did not.

Regards

Mikko


Re: [GENERAL] Populating large DB from Perl script

2007-11-02 Thread Mikko Partio
On Nov 2, 2007 8:45 PM, Kynn Jones <[EMAIL PROTECTED]> wrote:

> PS: As an aside to the list, as a programmer, when I'm starting out in
> language, I learn more than I can say from reading source code written
> by the experts, but for some reason I have had a hard time coming
> across expertly written PostgreSQL stored procedures, other than the
> occasionally didactic snippet in the docs.  All these expertly-written
> procedures seem to be very STORED away indeed!  If, on the contrary,
> it's just the case that I haven't looked in the right places, please
> hurl me a cluebrick!

It would be great if there was a stored procedure archive somewhere on the
web where people could post their procedures. I know there are some
code examples in the official documentation, but they are few in
number.

Regards

M



Re: [GENERAL] Automating Backup & Restore

2007-10-24 Thread Mikko Partio
On 10/24/07, smiley2211 <[EMAIL PROTECTED]> wrote:
>
>
> Hello all,
>
> Does someone have a script that backs up a database from one server and
> restores it to another server???  I am NEW to PostgreSQL so I am starting
> from scratch...
>
> so, in essence - what I want to do is  (I CURRENTLY DO THIS MANUALLY):
>
> Server1 (IS IT POSSIBLE TO DUMP DIRECTLY TO SERVER2??)

> pg_dump > filename.dmp
> scp filename.dmp to SERVER2
>
> Server2
> psql -d mydb -f filename.dmp
> vacuum
> reindex



server1$ pg_dump mydb | psql -h server2 -d mydb


Regards

M


Re: [GENERAL] multiple row insertion

2007-10-04 Thread Mikko Partio
On 10/4/07, test tester <[EMAIL PROTECTED]> wrote:
>
> In MySQL, I can insert multiple rows like this:
>
>
> insert into cars values(5, "toyota"),(5,"ford"), etc.
>
>
> How can I do something similar in PostgreSQL?
>
>
Exactly the same way; just use single quotes for the string literals. Make
sure, though, that your PostgreSQL version is new enough (multi-row VALUES
lists were added in 8.2).
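
For example (note the single quotes -- standard SQL string literals):

  INSERT INTO cars VALUES (5, 'toyota'), (5, 'ford');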

Regards

MP


Re: [GENERAL] Problem dropping table

2007-09-18 Thread Mikko Partio
On 9/18/07, Ken Logan <[EMAIL PROTECTED]> wrote:
>
>
> On Tue, 2007-09-18 at 11:10 -0700, Alan Hodgson wrote:
> > On Tuesday 18 September 2007 10:30, Ken Logan <[EMAIL PROTECTED]> wrote:
> > > When we try to drop the table we get the error:
> > > ERROR:  "member_pkey" is an index
> >
> > You have to remove the table from its Slony set before you can drop it.
> > Slony does some hackish things to subscriber tables that make them unusable
> > for normal DDL operations.
> > Your master probably isn't too thrilled, either, since it probably thinks the
> > table is still in a set.
>
> We actually used slonik_drop_table so it's been properly removed from
> Slony.
>
> > In any case, you'd probably have better luck with this on the Slony list.
> > I'm not at all sure your problem can be fixed without discarding the slave,
> > but someone there might know.
>
> I was afraid of that... since there don't seem to be any remnants left
> behind by Slony affecting this table, I'm not sure I can ask a sensible
> question about it on the Slony list, so I guess I'll need to either dig into
> the source for what the PostgreSQL error message means or reinit the slave
> (again).
>

It is a known issue with Slony (the foreign key causes it); check the Slony
mailing list archives for more info. AFAIK you have to use slonik and
EXECUTE SCRIPT ... EXECUTE ONLY ON NODE x to remove the table.

Regards

MP


Re: [GENERAL] Symlinking (specific) tables to different Drives

2007-09-03 Thread Mikko Partio
On 9/4/07, Ow Mun Heng <[EMAIL PROTECTED]> wrote:
>
> On Tue, 2007-09-04 at 07:09 +0300, Mikko Partio wrote:
> >
> >
> > On 9/4/07, Ow Mun Heng <[EMAIL PROTECTED]> wrote:
> > Hi,
> >
> > I'm running out of space on one of my partitions and I still
> > have not
> > gotten all the data loaded yet. I've read that one could
> > symlink the
> > pg_pg_xlog directory to another drive. I'm wondering if I can
> > do the
> > same for specific tables as well.
> >
> >
> > Create another tablespace at the new location and then ALTER TABLE ...
> > SET TABLESPACE newtablespace.
> >
>
> OOooohhh... I didn't know one could use tablespaces like that. (I mean,
> I did read the docs, but it just didn't register that it _can_ do
> something like that)
>
> additional question.. do I need to change the search_path?


No (changing tablespaces does not change your logical schema).

Regards

MP


Re: [GENERAL] Symlinking (specific) tables to different Drives

2007-09-03 Thread Mikko Partio
On 9/4/07, Ow Mun Heng <[EMAIL PROTECTED]> wrote:
>
> Hi,
>
> I'm running out of space on one of my partitions and I still have not
> gotten all the data loaded yet. I've read that one could symlink the
> pg_xlog directory to another drive. I'm wondering if I can do the
> same for specific tables as well.



Create another tablespace at the new location and then ALTER TABLE ...
SET TABLESPACE newtablespace.
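
A minimal sketch (path and names are examples only; the directory must
already exist and be owned by the postgres user):

  CREATE TABLESPACE newdisk LOCATION '/mnt/newdrive/pgdata';
  ALTER TABLE my_big_table SET TABLESPACE newdisk;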


Thanks.
>
> I've already done a pg_dump of the entire schema but have not dropped /
> re-init the DB to another location cos I'm afraid I'll lose some items.
> (I've to drop the DB, format the partition, merge it w/ another
> partition and re-init the DB then restore the DB from the dump)
>
> sigh.. wish it was easier, (meaning, like SQL Server where one can
> detach an entire DB/tablespace and then re-attach it elsewhere)


If you are moving the whole cluster and can afford the downtime, you can
shut down the postmaster, move $PGDATA to the new location and then start
the postmaster from that new location.

Regards

MP


Re: [GENERAL] "out of memory" error

2007-08-24 Thread Mikko Partio
On 8/23/07, Martijn van Oosterhout <[EMAIL PROTECTED]> wrote:
>
>
> You've got it completely wrong. By setting shared_buffers to 2GB it
> means no-one can use it. It's not postgres that's running out of
> memory, it's the rest of your system. Set it to something sane like
> 128MB or maybe smaller.



Isn't 128 MB quite low, considering the "current standard" of 25% - 50% of
total RAM?

Regards

MP


Re: [GENERAL] PostgreSQL clustering (shared disk)

2007-08-17 Thread Mikko Partio
On 8/17/07, Hannes Dorbath <[EMAIL PROTECTED]> wrote:
>
> On 17.08.2007 11:12, Mikko Partio wrote:
> > Maybe I'm just better off using the more simple (crude?) method of drbd
> +
> > heartbeat?
>
> Crude? Use what you like to use, but you should keep one thing in mind:
> If you don't know the software you are running in each and every detail,
> how it behaves in each and every situation you can think of, it's a bad
> idea to use it in a HA setup.
>
> You don't want to be one of those admins that just configured something
> in a few days, moved production stuff on it and fail to recover from a
> split brain situation. Setting up a HA environment is something you do
> in months, not days, at least if you want to do it right. There is so
> much that can go wrong, and so much to learn. Keep it simple.
>


Exactly my thoughts, as I have some experience with DRBD and I know it
works. My point was that since I have access to a SAN environment, shared
storage would be a more "elegant" solution, but as you pointed out it's
probably better to stick to the method that feels most comfortable.

Thanks for your thoughts.

Regards

MP


Re: [GENERAL] PostgreSQL clustering (shared disk)

2007-08-17 Thread Mikko Partio
On 8/16/07, Douglas McNaught <[EMAIL PROTECTED]> wrote:
>
> Devrim GÜNDÜZ <[EMAIL PROTECTED]> writes:
>
> >> What I'm pondering here is that is the cluster able to keep the
> >> postmasters synchronized at all times so that the database won't get
> >> corrupted.
> >
> > Keep all the $PGDATA in the shared disk. That would minimize data loss
> > (Of course, there is still a risk of data loss -- the postmasters are
> > not aware of each other and they don't share each other's buffers, etc.)
>
> It would be much better to have the cluster software only run one
> postmaster at a time, starting up the secondary if the primary fails.
> That's the usual practice with shared storage.



This was my original intention. I'm still quite hesitant to trust the
fencing device's ability to guarantee that only one postmaster at a time is
running, because of the disastrous possibility of corrupting the whole
database.

Maybe I'm just better off using the more simple (crude?) method of DRBD +
heartbeat?

Regards

MP


[GENERAL] PostgreSQL clustering (shared disk)

2007-08-15 Thread Mikko Partio
Hello list,

I have a mission to implement a two-node active/passive PostgreSQL cluster.
The databases in the cluster are rather large (hundreds of GBs), which prompts
me to consider a shared disk environment. I know this is not natively
supported by PostgreSQL, but I have been investigating the Red Hat Cluster
Suite with GFS. The idea would be that the cluster software with GFS (and HP
iLO) would make sure that only one postmaster at a time is able to
access the shared disk, and in case the active node fails the cluster
software would shift the services to the previously passive node. What I'm
pondering here is whether the cluster is able to keep the postmasters
synchronized at all times so that the database won't get corrupted.

Is there anyone on the list who has seen such a configuration, or, even
better, implemented it themselves? I found a small document by Devrim Gunduz
describing this scenario but it was rather scant on details.

If shared disk is definitely out of the question, the fallback plan would be
to use DRBD and Linux-HA.

Regards

MP


Re: [GENERAL] Moving to postgresql and some ignorant questions

2007-08-14 Thread Mikko Partio
On 8/14/07, Bill Moran <[EMAIL PROTECTED]> wrote:

>
> But the rule is, if any query within the transaction errors, then all
> queries
> within the transaction are rolled back.



This is the default behaviour, but with psql and the ON_ERROR_ROLLBACK setting
the behaviour can be changed. See
http://www.postgresql.org/docs/8.2/interactive/app-psql.html
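
A quick sketch of how it behaves in a psql session (table t and its unique
constraint are made up):

  -- wrap each statement in an implicit savepoint ('on' works too)
  \set ON_ERROR_ROLLBACK interactive
  BEGIN;
  INSERT INTO t VALUES (1);
  INSERT INTO t VALUES (1);  -- fails on the duplicate key, but only this statement rolls back
  INSERT INTO t VALUES (2);  -- still goes through
  COMMIT;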

Regards

MP


Re: [GENERAL] Unexpected crash

2007-07-18 Thread Mikko Partio

On 7/19/07, Richard Huxton <[EMAIL PROTECTED]> wrote:
> Mikko Partio wrote:
>
> OK. This is RH Enterprise Server, then?

Yes.

cat /etc/issue
Red Hat Enterprise Linux ES release 3 (Taroon Update 8)
Kernel \r on an \m

> Is there a server log-line saying you have a sig-11 crash?

No, that is the only line (with verbosity DEBUG2).

> What's puzzling me is why shift+return is different from just plain
> return (which presumably works). I'd suspect readline or similar. Try
> something like "rpm -q --requires postgresql-client" (you'll need to
> check the details, haven't used rpm much recently) to see what packages
> psql is depending on. Then just check they look OK for your installation.

Here are the results for that query, but I don't know how to interpret them:

rpm -q --requires postgresql-8.2.4

/sbin/ldconfig
initscripts
libc.so.6
libc.so.6(GLIBC_2.0)
libc.so.6(GLIBC_2.1)
libc.so.6(GLIBC_2.1.2)
libc.so.6(GLIBC_2.1.3)
libc.so.6(GLIBC_2.2)
libc.so.6(GLIBC_2.3)
libcrypt.so.1
libcrypto.so.4
libdl.so.2
libkrb5.so.3
libm.so.6
libpam.so.0
libpq.so.5
libreadline.so.4
libssl.so.4
libtermcap.so.2
libz.so.1
rpmlib(CompressedFileNames) <= 3.0.4-1
rpmlib(PayloadFilesHavePrefix) <= 4.0-1


Thanks for your help.

Regards

MP


Re: [GENERAL] Unexpected crash

2007-07-18 Thread Mikko Partio

On 7/19/07, Tom Lane <[EMAIL PROTECTED]> wrote:
> "Mikko Partio" <[EMAIL PROTECTED]> writes:
> > I installed the latest version from rpms and everything's ok, except when I
> > connect to a db with psql and press shift+return the backend crashes with
> > "Segmentation fault"!
>
> This is not a "backend crash", you are SIGQUIT-ing your psql session.
> Check your terminal settings, because that's not the usual key
> combination for SIGQUIT.
>
> regards, tom lane

You are right (as usual). I tried with another terminal and it didn't
produce the same effects. Thanks a lot.

Regards

MP


[GENERAL] Unexpected crash

2007-07-18 Thread Mikko Partio

Hello all,

I installed the latest version from RPMs and everything's ok, except that when I
connect to a db with psql and press shift+return, the backend crashes with
"Segmentation fault"! I guess the problem is with my installation but I
don't know how to debug it. It's not a very disconcerting thing per se, but I
wonder what other surprises might be hiding behind the scenes.

test2=> select version();
                                   version
-----------------------------------------------------------------------------
 PostgreSQL 8.2.4 on i686-redhat-linux-gnu, compiled by GCC gcc (GCC) 3.2.3
 20030502 (Red Hat Linux 3.2.3-58)
(1 row)

relevant log line:

Jul 19 08:33:36 mihin postgres[24584]: [2-1] LOG:  unexpected EOF on client
connection


Regards

MP


Re: [GENERAL] tables are not listable by \dt

2007-07-02 Thread Mikko Partio

On 7/2/07, Rajarshi Guha <[EMAIL PROTECTED]> wrote:


> Hi, we recently upgraded our Postgres installation from 7.4 to 8.2 by
> doing a dump and restore. However, after logging into the database (as
> a user that is not the superuser) and doing \dt I get the error:
>
> No relations found




Are you using the 8.2 version of psql?

Regards

MP


Re: [GENERAL] data partitions across different nodes

2007-06-25 Thread Mikko Partio

On 6/26/07, 金星星 <[EMAIL PROTECTED]> wrote:


> Are there any solutions based on PostgreSQL that can support
> distributing partitions (horizontal fragmentation) across different
> nodes? It doesn't need to support distributed transactions, since data
> inconsistency is not a critical problem in my situation.




pgpool-II might be what you're looking for.

http://pgpool.projects.postgresql.org/pgpool-II/en/

Regards

MP


Re: [GENERAL] Permission denied to create database

2007-05-04 Thread Mikko Partio

>
> $ sudo -u postgres psql -c "grant all on tablespace pg_default to joe"
> Password:
> GRANT
>
> $ createdb -U joe joejunkdb
> createdb: database creation failed: ERROR:  permission denied to create
> database
>

How about ALTER ROLE joe CREATEDB

Regards

MP




[GENERAL]

2007-04-16 Thread Mikko Partio
Nico Sabbi wrote:
>
>
> but "cp /tmp/pg//00010021 pg_xlog/RECOVERYXLOG" looks
> suspicious to me.
>

Works for me [clip from process listing]:

postgres 12864 12863  0 18:36 ?        00:00:00 cp
/wal/000100E10035 pg_xlog/RECOVERYXLOG

I think RECOVERYXLOG should be overwritten every time (the -l mode also
recreates the RECOVERYXLOG link pointing to the latest WAL).

BTW thanks to Simon Riggs for pg_standby, it works very well!

Regards

MP





Re: [GENERAL] Anyone know where I can get an 8.2.3 binary for ubuntu?

2007-03-31 Thread Mikko Partio
> Subject says it all.  Doing a source compile under Debian or
> Debian-like condition is not an option for the end user.  They need
> an apt-get (the ubuntu equivalent to rpm AFAICT) version.
>
> Unfortunately, the latest I can find is 8.1.8
> Where's 8.2.3?

>

What Ubuntu version do you have? I believe that 8.2.3 is in the
repositories for Edgy and Feisty. Or download the .debs from
https://launchpad.net/ubuntu/+source/postgresql-8.2

Regards

MP




Re: [GENERAL] PG Dump 8.2

2007-03-03 Thread Mikko Partio

Joshua D. Drake wrote:
> Mikko Partio wrote:
> > Laurent ROCHE wrote:
> > > Hi,
> > >
> > > Does anyone know if Ubuntu 6.10 (LTS) has packages for PostgreSQL 8.2,
> > > from a recommendable place ?
> > >
> > > I can not recommend my clients to use 8.2 if there's nobody supporting
> > > the packages (so just compiling from the source code is not an option).
>
> To my knowledge there are no 8.2 binaries for Ubuntu LTS. The best you
> could probably get is 8.1, which is a perfectly acceptable and stable
> release.
>
> Sincerely,
>
> Joshua D. Drake

Well, there are 8.2 binaries for Ubuntu 6.10 [1], but for the LTS release
(6.06) 8.1 is the latest version available.


Regards

MP


[1] https://launchpad.net/ubuntu/+source/postgresql-8.2



Re: [GENERAL] PG Dump 8.2

2007-03-02 Thread Mikko Partio

Laurent ROCHE wrote:
> Hi,
>
> Does anyone know if Ubuntu 6.10 (LTS) has packages for PostgreSQL 8.2, from a
> recommendable place ?
>
> I can not recommend my clients to use 8.2 if there's nobody supporting the
> packages (so just compiling from the source code is not an option).

They are right there in the repositories.

Regards

MP



Re: [GENERAL] Cast record as text SOLVED

2007-02-17 Thread Mikko Partio

Mikko Partio wrote:
> I agree that the ability to restore changes is quite nice, but my
> primary goal is to record changes from many tables into one table, and
> I think tablelog does not offer that. Do you know any way of casting a
> record to text, or perhaps a different way altogether to audit into one
> table? It's hard to believe I am the first person to run into this
> problem.
>
> Regards
>
> MP

Got it solved with pl/perl; I guess pl/pgsql was the wrong choice of
language for a dynamic thing such as this.


Regards

MP



Re: [GENERAL] Cast record as text

2007-02-17 Thread Mikko Partio

A. Kretschmer wrote:
> > My original idea was to log changes from different tables to one audit
> > table, and I think tablelog uses separate audit tables for each monitored
> > table?
>
> Yes, but with tablelog it is possible to restore any changes, you can
> restore a table.
>
> A blog-entry from Andreas Scherbaum, the maintainer, about tablelog:
> http://ads.wars-nicht.de/blog/archives/100-Log-Table-Changes-in-PostgreSQL-with-tablelog.html

I agree that the ability to restore changes is quite nice, but my
primary goal is to record changes from many tables into one table, and I
think tablelog does not offer that. Do you know any way of casting a
record to text, or perhaps a different way altogether to audit into one
table? It's hard to believe I am the first person to run into this
problem.


Regards

MP



Re: [GENERAL] Cast record as text

2007-02-14 Thread Mikko Partio
> Why do you want to reinvent the wheel?
>
> http://pgfoundry.org/projects/tablelog/
>
>
> But it use a separate log-table per table.
>
>
> Andreas

My original idea was to log changes from different tables to one audit
table, and I think tablelog uses separate audit tables for each monitored
table?

Regards

MP




Re: [GENERAL] Client connection: Port 5432 / Postgres

2007-02-08 Thread Mikko Partio

Nicolas Gignac wrote:
> I have installed Postgres 8.2 on an internal server having Windows
> Server 2003 (IIS 6) up and running.
> - I have configured the pg_hba.conf file to:
>   host    all    0.0.0.0./0    md5


I think it should be like:

host   all   all   0.0.0.0/0   md5


MP



Re: [GENERAL] CONNECT privilege

2006-11-26 Thread Mikko Partio

Joris Dobbelsteen wrote:
> When using pgAdmin III it does not display the granted CONNECT privilege.
> Also when doing GRANT CONNECT ON DATABASE  TO  it
> succeeds, but I fail to observe its effects: i.e. users are not allowed
> into a database.

Have you modified pg_hba.conf to allow the new users to log in?


MP




Re: [GENERAL] Simple stored procedure examples?

2006-11-04 Thread Mikko Partio

> novnov wrote:
>
> Thanks to both of you for responding. I should have included the code for
> my own attempt at #1, which is just as you suggest:
>
> update item set itemname = 'fox';
>
> I've tried single- and double-quoting the table and field names; all caps
> for the UPDATE etc, exactly matching the capitalization of the table and
> field names (really Item and ItemName).
>
> I wonder if "Item" is a reserved word in pgsql?




I think you haven't quoted the field names correctly.


dun=# CREATE TABLE "Item" (id int4, "ItemName" text);
CREATE TABLE
dun=# INSERT INTO "Item" VALUES(1,'aaa');
INSERT 0 1
dun=# UPDATE "Item" SET "ItemName" = 'fox';
UPDATE 1
dun=# SELECT * FROM "Item";
 id | ItemName
----+----------
  1 | fox
(1 row)


If you want case-sensitive names, you have to use double quotes.


MP
