Re: [GENERAL] Backing up through a database connection (not pg_dump)

2012-03-27 Thread Eduardo Morras

At 01:05 27/03/2012, Tim Uckun wrote:

Is there a way to back up a database or a cluster through a database
connection?  I mean I want to write some code that connects to the
database remotely and then issues a backup command like it would issue
any other SQL command. I realize the backups would need to reside on
the database server.


You can use a stored procedure written in PL/sh
(http://plsh.projects.postgresql.org/), like this:



CREATE FUNCTION dump_db(text, text) RETURNS text AS '
#!/bin/sh
pg_dump $1 > $2
' LANGUAGE plsh;


Note that you must run CREATE LANGUAGE first. $1 is the database name, $2 is
the output file name, and you should check write permissions on $2. Be
careful and review your security, because any shell script will run as the
postgres user. I don't know if it works with 9.x+ versions of postgres.
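
For reference, a minimal usage sketch (assuming PL/sh is installed; the
database name and output path are placeholders):

CREATE LANGUAGE plsh;
SELECT dump_db('mydb', '/tmp/mydb.sql');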





HTH 






Re: [GENERAL] Backing up through a database connection (not pg_dump)

2012-03-27 Thread hubert depesz lubaczewski
On Tue, Mar 27, 2012 at 12:05:00PM +1300, Tim Uckun wrote:
 Is there a way to back up a database or a cluster through a database
 connection?  I mean I want to write some code that connects to the
 database remotely and then issues a backup command like it would issue
 any other SQL command. I realize the backups would need to reside on
 the database server.

Like this:
http://www.depesz.com/2011/01/24/waiting-for-9-1-pg_basebackup/ ?
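
For reference, a minimal sketch of the kind of invocation that post describes
(host and user are placeholders; the server must be configured to accept a
replication connection):

pg_basebackup -h master-host -U replication_user -D /path/to/backup -P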

Best regards,

depesz

-- 
The best thing about modern society is how easy it is to avoid contact with it.
 http://depesz.com/



[GENERAL] Problem with installation

2012-03-27 Thread nissyre
Hi,
  I need to install PostgreSQL 9 in silent mode (as with the MSI installer for
Postgres 8.3). Is it possible with PostgreSQL 9.0 or higher?





Re: [GENERAL] Problem with installation

2012-03-27 Thread Ashesh Vashi
Use the --mode unattended option with the one-click installer.
You can always check the other options using the installer's --help option.
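
For example, something along these lines (a sketch; the exact installer file
name and supported options depend on the version you download):

postgresql-9.0.4-1-windows.exe --mode unattended --superpassword mypassword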

--

Thanks & Regards,

Ashesh Vashi
EnterpriseDB INDIA: Enterprise PostgreSQL Company
http://www.enterprisedb.com

http://www.linkedin.com/in/asheshvashi



2012/3/27 niss...@wp.pl

 Hi,
  I need to install PostgreSQL 9 in silent mode (as with the MSI installer
 for Postgres 8.3). Is it possible with PostgreSQL 9.0 or higher?






Re: [GENERAL] Problem with installation

2012-03-27 Thread John R Pierce

On 03/27/12 1:46 AM, niss...@wp.pl wrote:

Hi,
   I need to install PostgreSQL 9 in silent mode (as with the MSI installer
for Postgres 8.3). Is it possible with PostgreSQL 9.0 or higher?


what OS? on centos/redhat/fedora type linux,

yum -y install postgresql91-{server,contrib,devel}

runs without user intervention.  if you want to suppress the progress
messages, add > /dev/null to the end.




--
john r pierce                          N 37, W 122
santa cruz ca                          mid-left coast




Re: [GENERAL] Problem with installation

2012-03-27 Thread Thomas Kellerer

niss...@wp.pl, 27.03.2012 10:46:

Hi,
   I need to install PostgreSQL 9 in silent mode (as with the MSI installer
for Postgres 8.3). Is it possible with PostgreSQL 9.0 or higher?


Why not use the binary ZIP distribution?

Steps for installing are then:

1) Unzip the archive
2) Run initdb
3) Run pg_ctl register to create the Windows service

If your own installer can unzip ZIP archives and start external programs, that 
should be easy to integrate.
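
Roughly like this (a sketch; the archive name and paths are placeholders):

unzip postgresql-9.0-binaries.zip -d C:\pgsql
C:\pgsql\bin\initdb -U postgres -E UTF8 -D C:\pgsql\data
C:\pgsql\bin\pg_ctl register -N postgresql -D C:\pgsql\data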

Thomas






[GENERAL] Facing error while restoring the database

2012-03-27 Thread Akshay Joshi
Hi

I am facing an issue while restoring a database. I took a backup of my
database using pg_dump, then created a new database and tried to restore it
using pg_restore. I am using PostgreSQL 9.0.
I have created one small test case to reproduce the issue; the sql file is
attached.

*Steps to reproduce:-*

   1. Create a new database 'Test_Backup' and run the attached .sql file.
   2. Take a backup of the above database using pg_dump.
   3. Create a new database 'Test_Restore' and try to restore it using
   pg_restore.

As per my understanding the problem is with pg_dump; I need confirmation.
After analyzing the dump file (attached with the mail) it seems that the COPY
sample statement comes before COPY sample_one, which is wrong because
test.sample.ref_id REFERENCES test.sample_one(id) and I have created a
constraint on the sample table which fails during restore.

One thing I found is that if we rename the 'sample' table to 'sample2' and
the 'sample_one' table to 'sample1', the bug is not reproducible. So it seems
that pg_dump takes the backup sequentially rather than resolving the
dependencies.

Is there any way to solve this issue without renaming the tables? Please
help.

-- 
*Akshay Joshi
Senior Software Engineer
EnterpriseDB Corporation
The Enterprise PostgreSQL Company
Phone: +91 20-3058-9522
Mobile: +91 976-788-8246*


TestCase.sql
Description: Binary data


sample_plain.backup
Description: Binary data



[GENERAL] PANIC: corrupted item pointer

2012-03-27 Thread Janning Vygen
Hi,

I am running postgresql-9.1 from the Debian backports package, with
fsync=on
full_page_writes=off
I didn't have any power failures on this server.

Now I got this:

1. Logfile PANIC

postgres[27352]: [4-1] PANIC:  corrupted item pointer: offset = 21248,
size = 16
postgres[27352]: [4-2] STATEMENT:  insert into RankingEntry
(rankingentry_mitglied_name, rankingentry_spieltagspunkte,
rankingentry_gesamtpunkte, rankingentry_spieltagssiege,
rankingentry_spieltagssieger, tippspieltag_id, mitglied_id) values ($1,
$2, $3, $4, $5, $6, $7)
postgres[26286]: [2-1] LOG:  server process (PID 27352) was terminated
by signal 6: Aborted
postgres[26286]: [3-1] LOG:  terminating any other active server processes

2. All my database connections are closed after this log entry

3. My application is throwing lots of java.io.EOFExceptions because of this.

Sometimes I get exactly the same behaviour but without item 1: there is no
PANIC logged, but all connections are suddenly closed with an EOFException.

I searched the archive and found
http://archives.postgresql.org/pgsql-general/2007-06/msg01268.php

So I first reindexed all indexes on table rankingentry concurrently
and replaced the old ones. No errors.

Then I ran VACUUM rankingentry and I got:
kicktipp=# VACUUM rankingentry ;
WARNING: relation rankingentry page 424147 is uninitialized --- fixing
WARNING: relation rankingentry page 424154 is uninitialized --- fixing
WARNING: relation rankingentry page 424155 is uninitialized --- fixing
WARNING: relation rankingentry page 424166 is uninitialized --- fixing
WARNING: relation rankingentry page 424167 is uninitialized --- fixing
WARNING: relation rankingentry page 424180 is uninitialized --- fixing
VACUUM
Time: 138736.347 ms

Now I restarted my process which issued the insert statement which
caused the server panic. Everything runs fine now.

I am worried because I have never had an error like this with PostgreSQL. I
just switched to 9.1 and started to run a hot standby server (WAL
shipping). Does this error have any relation to that?

Should I check or exchange my hardware? Is it a hardware problem?

Should I still worry about it?

regards
Janning



-- 
Kicktipp GmbH

Venloer Straße 8, 40477 Düsseldorf
Sitz der Gesellschaft: Düsseldorf
Geschäftsführung: Janning Vygen
Handelsregister Düsseldorf: HRB 55639

http://www.kicktipp.de/



Re: [GENERAL] Facing error while restoring the database

2012-03-27 Thread Alban Hertroys
On 27 March 2012 11:33, Akshay Joshi akshay.jo...@enterprisedb.com wrote:
 Hi

 I am facing issue while restoring the database. I have taken the backup of
 my database using pg_dump and then create new database and try to restore it
 using pg_restore. I am using PostgreSQL 9.0.

What is the error?

It looks like you are trying to use pg_restore on a plain text dump.
In that case, use psql to restore instead.

-- 
If you can't see the forest for the trees,
Cut the trees and you'll see there is no forest.



Re: [GENERAL] Facing error while restoring the database

2012-03-27 Thread Akshay Joshi
On Tue, Mar 27, 2012 at 6:15 PM, Alban Hertroys haram...@gmail.com wrote:

 On 27 March 2012 11:33, Akshay Joshi akshay.jo...@enterprisedb.com
 wrote:
  Hi
 
  I am facing issue while restoring the database. I have taken the backup
 of
  my database using pg_dump and then create new database and try to
 restore it
  using pg_restore. I am using PostgreSQL 9.0.

 What is the error?


   pg_restore: restoring data for table "sample"
   pg_restore: [archiver (db)] Error while PROCESSING TOC:
   pg_restore: [archiver (db)] Error from TOC entry 1800; 0 54031 TABLE
DATA sample postgres
   pg_restore: [archiver (db)] COPY failed for table "sample": ERROR:  new
row for relation "sample" violates check constraint
"check_params_array_size"
   CONTEXT:  COPY sample, line 1: 1 2 {}
   pg_restore: restoring data for table "sample_one"

  As per the above log, it is clear that restoring of the sample table starts
before sample_one, and because of that my check_params_array_size constraint
fails.



 It looks like you are trying to use pg_restore on a plain text dump.
 In that case, use psql to restore instead.


  No, I used the tar format. I took the backup using the following
command:
  pg_dump.exe --host localhost --port 5433 --username postgres
--no-password --verbose --file C:\Users\Akshay\Desktop\sample_tar.backup
Test_Backup

 And restored it into the new database Test_Restore using the following
command:
 pg_restore.exe --host localhost --port 5433 --username postgres --dbname
Test_Restore --no-password --verbose
C:\Users\Akshay\Desktop\sample_tar.backup




 --
 If you can't see the forest for the trees,
 Cut the trees and you'll see there is no forest.




-- 
*Akshay Joshi
Senior Software Engineer
EnterpriseDB Corporation
The Enterprise PostgreSQL Company
Phone: +91 20-3058-9522
Mobile: +91 976-788-8246*


[GENERAL] How can I modify a row in a function such that the caller sees it?

2012-03-27 Thread Kenneth Tilton
Bit of a nooby Q, though I have researched this quite a bit, found nothing,
and it seems simple: I just want to modify a row in a plpgsql function such
that the change can be seen by the caller.

The functions happen to be called in a before trigger, to finish
initializing the row. The functions are named in a column of the new row,
so they are effectively callbacks.

The closest I have come was an error saying the function returned nine
columns, which is great because the row has nine columns. :)

Somewhat condensed (I know, I know <g>), here is the calling trigger
function:

CREATE OR REPLACE FUNCTION dcm.biz_proc_init()
  RETURNS trigger AS
$BODY$

declare
   bpnew dcm.biz_proc;
begin
   if NEW.timing_initializer is not null then
   execute 'select ' || NEW.timing_initializer || '($1)'
 using NEW
 into bpnew;
   return bpnew;
   else
   return NEW;
   end if;
end;
$BODY$
  LANGUAGE plpgsql VOLATILE
  COST 100;

Here is a callback:

CREATE OR REPLACE FUNCTION dcm.test_bp_init(bp dcm.biz_proc)
  RETURNS dcm.biz_proc AS
$BODY$

declare
begin
   bp.max_duration_time = now() + interval '30 days';
   return bp;
end;
$BODY$
  LANGUAGE plpgsql VOLATILE
  COST 100;
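
For context, the trigger function would presumably be attached to the table
along these lines (an assumption; the actual CREATE TRIGGER statement wasn't
shown in the post):

CREATE TRIGGER biz_proc_init_trg
  BEFORE INSERT ON dcm.biz_proc
  FOR EACH ROW
  EXECUTE PROCEDURE dcm.biz_proc_init();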

If the above looks OK I will carve out an actual disclosable pile of SQL to
recreate the problem, but this seems like a trivial thing, so I am hoping I
have just missed the right bit of documentation for something
straightforward.

-ken


Re: [GENERAL] Facing error while restoring the database

2012-03-27 Thread Alban Hertroys
On 27 March 2012 15:12, Akshay Joshi akshay.jo...@enterprisedb.com wrote:
 On Tue, Mar 27, 2012 at 6:15 PM, Alban Hertroys haram...@gmail.com wrote:

 On 27 March 2012 11:33, Akshay Joshi akshay.jo...@enterprisedb.com
 wrote:
    pg_restore: restoring data for table sample
    pg_restore: [archiver (db)] Error while PROCESSING TOC:
    pg_restore: [archiver (db)] Error from TOC entry 1800; 0 54031 TABLE DATA
 sample postgres
    pg_restore: [archiver (db)] COPY failed for table sample: ERROR:  new
 row for relation sample violates check constraint
 check_params_array_size
    CONTEXT:  COPY sample, line 1: 1 2 {}
    pg_restore: restoring data for table sample_one

   As per the above log, it is clear that restoring of sample table start
 before sample_one and due to that my check_params_array_size constraint
 gets failed.

Aha, it's a check constraint! Those don't cover multiple tables, so
the order in which tables with check constraints get restored should
not matter.

In your case it apparently does matter, which seems to indicate that
you did something to circumvent that limitation and are now running
into problems because of that.

What's the definition of that constraint?

Perhaps a foreign key constraint would serve you? Adding one should
at least make sure your database gets dumped in the correct order.
-- 
If you can't see the forest for the trees,
Cut the trees and you'll see there is no forest.



Re: [GENERAL] Facing error while restoring the database

2012-03-27 Thread Albe Laurenz
Akshay Joshi wrote:
 I am facing issue while restoring the database. I have taken the
backup of my database using pg_dump
 and then create new database and try to restore it using pg_restore. I
am using PostgreSQL 9.0.
 I have created one small test case to reproduce the issue, attached is
the sql file.
 
 Steps to reproduce:-
 
 1.Create new database 'Test_Backup' and run the attached .sql
file.
 2.Take backup of the above mentioned database using pg_dump
 3.Create new database 'Test_Restore, and try to restore using
pg_restore.
 
 As per my understanding, problem is with the pg_dump, need
confirmation. After analyzing the dump
 file(attached with the mail) it seems that COPY sample statement comes
before COPY sample_one which is
 wrong because test.sample.ref_id is the REFERENCES of
test.sample_one(id) table and I have created one
 constraint on sample table which is getting failed during restore.
 
 One thing that I found is if we rename the 'sample' table to 'sample2'
and 'sample_one' table to
 'sample1', bug is not reproducible. So it seems that pg_dump take the
backup sequential rather
 resolving the dependencies.
 
 Is there any way to solve this issue without renaming the tables?
Please help.

I can reproduce the error with 9.1.

pg_dump does not resolve dependencies; it avoids problems by adding
constraints after inserting the data.

It seems that this is not done for CHECK constraints, however - they are
added when the table is defined.

I think that this is a bug.

Yours,
Laurenz Albe



Re: [GENERAL] Facing error while restoring the database

2012-03-27 Thread Tom Lane
Albe Laurenz laurenz.a...@wien.gv.at writes:
 pg_dump does not resolve dependencies, it avoids problems by adding
 constraints after inserting the data.

 It seems that this is not done for CHECK constraints, however - they are
 added when the table is defined.

 I think that this is a bug.

It is not a bug; it is an unsafe and unsupported use of CHECK
constraints.

Using a CHECK to enforce a cross-row constraint is fundamentally broken,
because there is no way for the database to know that the constraint
might be violated after the *other* row is modified.  In the example
at hand, a change in sample_one.param_names could leave the constraint
unsatisfied for some rows in sample, but the database wouldn't detect
that.

I think the right fix here would be to redesign the table schema so that
the required cross-table constraint could be expressed as a foreign key.
We don't have enough context to guess at what a better design would
look like, though.

regards, tom lane
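
One hypothetical way to recast the rule as a foreign key, using the names
from this thread (an illustrative sketch, not a reviewed design): store the
array length explicitly on both sides, tie the tables together with a
composite foreign key, and keep only single-row CHECKs, which dump and
restore safely.

ALTER TABLE test.sample_one ADD COLUMN param_count integer;
ALTER TABLE test.sample_one ADD CONSTRAINT sample_one_count_chk
    CHECK (param_count = array_length(param_names, 1));
ALTER TABLE test.sample_one ADD CONSTRAINT sample_one_id_count_uq
    UNIQUE (id, param_count);

ALTER TABLE test.sample ADD COLUMN param_count integer;
ALTER TABLE test.sample ADD CONSTRAINT sample_count_chk
    CHECK (param_count = array_length(params, 1));
ALTER TABLE test.sample ADD CONSTRAINT sample_ref_fk
    FOREIGN KEY (ref_id, param_count)
    REFERENCES test.sample_one (id, param_count);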



[GENERAL] Doesn't work on Windows 7 - goodbye PostgreSQL

2012-03-27 Thread vossistemas
I could not get a connection on WINDOWS 7 from another computer. Since there
was no response, I came to the conclusion that POSTGRESQL DOES NOT WORK WITH
WINDOWS 7, so I will have to switch databases. I am saying goodbye and
thanking the few who tried to help me. As for the rest, I found that this is
not a very supportive community, and therefore I will no longer be part of it.
Farewell to everyone and to PostgreSQL.





Re: [GENERAL] Facing error while restoring the database

2012-03-27 Thread Akshay Joshi
On Tue, Mar 27, 2012 at 7:40 PM, Tom Lane t...@sss.pgh.pa.us wrote:

 Albe Laurenz laurenz.a...@wien.gv.at writes:
  pg_dump does not resolve dependencies, it avoids problems by adding
  constraints after inserting the data.

  It seems that this is not done for CHECK constraints, however - they are
  added when the table is defined.

  I think that this is a bug.

 It is not a bug; it is an unsafe and unsupported use of CHECK
 constraints.

 Using a CHECK to enforce a cross-row constraint is fundamentally broken,
 because there is no way for the database to know that the constraint
 might be violated after the *other* row is modified.  In the example
 at hand, a change in sample_one.param_names could leave the constraint
 unsatisfied for some rows in sample, but the database wouldn't detect
 that.


  In my case no one is allowed to insert into or modify the rows of the
sample_one table. I have already inserted some rows into sample_one, and the
constraint I want is that the number of array elements in
sample_one.param_names and sample.params must be the same. That's why I
created the CHECK constraint on the sample table. Users can insert, modify
and delete rows of the sample table, so I don't want any mismatch in the
number of array elements between sample_one.param_names and sample.params.


 I think the right fix here would be to redesign the table schema so that
 the required cross-table constraint could be expressed as a foreign key.
 We don't have enough context to guess at what a better design would
 look like, though.

regards, tom lane




-- 
*Akshay Joshi
Senior Software Engineer
EnterpriseDB Corporation
The Enterprise PostgreSQL Company
Phone: +91 20-3058-9522
Mobile: +91 976-788-8246*


Re: [GENERAL] Facing error while restoring the database

2012-03-27 Thread Alan Hodgson
On Tuesday, March 27, 2012 08:20:23 PM Akshay Joshi wrote:
   In my case I won't allow anyone to insert/modify the rows of sample_one
 table. I have already inserted some rows in sample_one table where I
   want one constraint is number of array elements of sample_one.param_names
 and sample.params must be same. That's why I have created
   CHECK constraint in sample table. User can insert, modify and delete the
 rows of sample table, so I don't want any mismatch in the number of
   array elements of sample_one.param_names and sample.params table.
 

Well, you have a couple of options.

1) Redesign to use a trigger instead of a check constraint.

2) During restore, you can use a custom table of contents extracted from the
dump: remove the constraint from it, do your restore, and then create the
constraint manually afterwards. This requires that you use the custom dump
format and pg_restore, of course.
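
In outline, that looks something like this (file and database names are
placeholders, and it assumes the constraint appears as its own TOC entry):

pg_restore -l sample.backup > toc.lst
# edit toc.lst and comment out the constraint's line with a leading ';'
pg_restore -L toc.lst -d Test_Restore sample.backup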




[GENERAL] configuring RAID10 for data in Amazon EC2 cloud?

2012-03-27 Thread Welty, Richard
does anyone have any tips on this? Linux software RAID doesn't seem to be
doing a very good job here, but I may well have missed something.

I did a fairly naive setup using Linux software RAID on an Amazon Linux
instance: 10 volumes (8G each), with WAL on a separate EBS volume, using the
following setup:

mdadm -v --create /dev/md1 --level=raid10 --raid-devices=10 /dev/xvdg /dev/xvdh 
/dev/xvdi /dev/xvdj /dev/xvdk /dev/xvdl /dev/xvdm /dev/xvdn /dev/xvdo /dev/xvdp

pvcreate /dev/md1

vgcreate vg-pgdata /dev/md1

vgdisplay vg-pgdata

lvcreate -L39.98g -nlv-pgdata vg-pgdata
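
The filesystem steps aren't shown; a sketch of what presumably followed,
with an assumed mount point:

mkfs.ext4 /dev/vg-pgdata/lv-pgdata
mount -o noatime /dev/vg-pgdata/lv-pgdata /var/lib/pgsql/data
blockdev --setra 10240 /dev/md1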


this particular instance is running about a factor of two slower than a simple
single-disk instance. Both the single-disk instance and the one with RAID10
for ~postgres/data/base started from Amazon m1.xlarge instances.

postgresql version is 8.4.9, using a simple pgbench test for 600 seconds; the 
single disk instance shows this:

dbDev, single disk, shared_buffers=4GB, effective_cache_size=8GB
   disk mounted noatime, readahead 4096, other stuff default

-bash-4.1$ pgbench -T 600 bench
starting vacuum...end.
transaction type: TPC-B (sort of)
scaling factor: 1
query mode: simple
number of clients: 1
duration: 600 s
number of transactions actually processed: 535018
tps = 891.696072 (including connections establishing)
tps = 891.704512 (excluding connections establishing)


and the RAID10 instance shows this:

dbQA, wal+raid10 setup, ext3 for WAL, ext4 for raid10,
  shared_buffers=2GB, effective_cache_size=3GB
  readahead 10240, walraid mount noatime, journal=ordered
  vm.swappiness=0,vm.overcommit_memory=2, dirty_ratio=2,
  dirty_background_ratio=1

starting vacuum...end.
transaction type: TPC-B (sort of)
scaling factor: 1
query mode: simple
number of clients: 1
duration: 600 s
number of transactions actually processed: 261513
tps = 435.854738 (including connections establishing)
tps = 435.858853 (excluding connections establishing)



Re: [GENERAL] configuring RAID10 for data in Amazon EC2 cloud?

2012-03-27 Thread Ben Chobot
On Mar 27, 2012, at 8:25 AM, Welty, Richard wrote:

 does anyone have any tips on this? Linux Software Raid doesn't seem to be 
 doing a very good job here, but i may well have missed something.
 
iostat -x 5 is your friend. We've been struggling with a similar setup 
recently, and the TL;DR summary is that EBS has unreliable performance and 
isn't acceptable to use when your performance matters. When it's rocking, a 
single EBS volume can get you ~1200 IOPS, but far too often, a volume will drop 
to less than 100 IOPS. And then there are the occasional times when they almost 
lock up, but not entirely, so they lock up your RAID but don't get 
automatically dropped (though you could drop them yourself).

When you have an 8-volume RAID, you have 8x the exposure to these problems. 
We're coming to the realization that AWS has no real way to run a normal, 
non-memory-resident database, and are looking to host our databases outside AWS 
using DirectConnect, or something similar. (And not to hijack this thread, but 
if anybody has experiences with that, I'd love to hear them.)

Re: [GENERAL] configuring RAID10 for data in Amazon EC2 cloud?

2012-03-27 Thread Frank Lanitz
On Tue, 27 Mar 2012 11:25:53 -0400
Welty, Richard rwe...@ltionline.com wrote:

 does anyone have any tips on this? Linux Software Raid doesn't seem
 to be doing a very good job here, but i may well have missed
 something.
 
 i did a fairly naive setup using linux software raid on an amazon
 linux instance, 10 volumes (8G each), (WAL on a separate EBS volume)
 with the following setup:
 

You might want to check with Amazon here. 

Cheers, 
Frank 

-- 
Frank Lanitz fr...@frank.uvena.de




[GENERAL] Multiple Slave Failover with PITR

2012-03-27 Thread Ken Brush
Hello everyone,

I notice that the documentation at
http://wiki.postgresql.org/wiki/Binary_Replication_Tutorial
doesn't contain steps, for a multiple-slave setup, on re-establishing
the other slaves after one slave has become the new master.

Based on the documentation, here are the most fail-proof steps I came up with:

1. Master dies :(
2. Touch the trigger file on the most caught up slave.
3. Slave is now the new master :)
4. use pg_basebackup or other binary replication trick (rsync, tar
over ssh, etc...) to bring the other slaves up to speed with the new
master.
5. start the other slaves pointing to the new master.

But, that can take time (about 1-2 hours) with my medium sized DB
(580GB currently).

After testing a few different ideas that I gleaned from posts on the
mail list, I came up with this alternative method:

1. Master dies :(
2. Touch the trigger file on the most caught up slave
3. Slave is now the new master.
4. On the other slaves do the following:
5. Shutdown postgres on the slave
6. Delete every file in /data/pgsql/data/pg_xlog
7. Modify the recovery.conf file to point to the new master and
include the line recovery_target_timeline='latest'
8. Copy the history file from the new master to the slave (it's the
most recent #.history file in the xlog directory)
9. Startup postgres on the slave and watch it sync up to the new
master (about 1-5 minutes usually)
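
For reference, the recovery.conf on each remaining slave would then look
roughly like this (host and user are placeholders):

standby_mode = 'on'
primary_conninfo = 'host=new-master port=5432 user=replicator'
recovery_target_timeline = 'latest'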

My question is this: is the alternative method adequate? I tested it a
bit and couldn't find any problems with data loss or inconsistency.

I still use the fail-proof method above to re-incorporate the old
master as a new slave.

Sincerely,
-Ken



[GENERAL] Valid query times out when run from bash script

2012-03-27 Thread W. David Jarvis
Hello all -

I've been trying to get a bash script set-up that creates a DB, does a
join, writes the join to file, and then cleans itself up afterwards.
Everything within the script runs fine when entered at an actual
prompt, but when I run the script the SELECT query runs indefinitely.
I'm not sure what the cause is—the script is being run on a box
connecting to its own local postgres installation, so it can't be a
connection issue. Any thoughts?

The bash script is included below; postgres version is 9.0.4. Any help
very much appreciated :)

#!/bin/bash

: ${1?"ERROR: Incorrect number of arguments (files have not been properly specified). Proper format is 'mta-join old_mta_file new_mta_file desired_output_filename.'"}
: ${2?"ERROR: Incorrect number of arguments (files have not been properly specified). Proper format is 'mta-join old_mta_file new_mta_file desired_output_filename.'"}
: ${3?"ERROR: Incorrect number of arguments (files have not been properly specified). Proper format is 'mta-join old_mta_file new_mta_file desired_output_filename.'"}

OLD_MTA_FILE=$1
NEW_MTA_FILE=$2
OUTPUT_FILE=$3

# Figure out how many days we need columns for
export NEW_MTA_COLWIDTH=`head -1 "$NEW_MTA_FILE" | grep ',' -o | wc -l`
NEW_MTA_COLWIDTH=$(( (NEW_MTA_COLWIDTH-7)/4 ))

# Assemble the extra SQL for the above
ADDITIONAL_CREATION_FIELDS=""
ADDITIONAL_SELECTION_FIELDS=""
for (( c=0; c<=$NEW_MTA_COLWIDTH; c++ ))
do
        ADDITIONAL_CREATION_FIELDS="${ADDITIONAL_CREATION_FIELDS}DAY_$c varchar(255), event_count_$c numeric(20,10), conversions_$c numeric(20,10), revenue_$c numeric(20,10), "
        ADDITIONAL_SELECTION_FIELDS="${ADDITIONAL_SELECTION_FIELDS}new_mta.DAY_$c, new_mta.event_count_$c, new_mta.conversions_$c, new_mta.revenue_$c, "
done

# Let's get rid of that extra comma at the end.

ADDITIONAL_CREATION_FIELDS=${ADDITIONAL_CREATION_FIELDS:0:${#ADDITIONAL_CREATION_FIELDS}-2}
ADDITIONAL_SELECTION_FIELDS=${ADDITIONAL_SELECTION_FIELDS:0:${#ADDITIONAL_SELECTION_FIELDS}-2}

echo -n "Creating database mta-join..."
createdb mta-join
echo -e "Done!\n"

echo "Creating table new_mta..."
# PSQL commands start here
psql mta-join <<EOF
CREATE TABLE new_mta (
report_date             date,
campaign_id             integer,
tracking_campaign_id    integer,
placement_id            integer,
creative_id             integer,
package_id              integer,
audience_id             integer,
$ADDITIONAL_CREATION_FIELDS);
EOF
echo -e "Done!\n"

echo -n "Inserting new mta file into new_mta table..."
psql mta-join <<EOF
\copy new_mta from '$NEW_MTA_FILE' with delimiter ','
\q
EOF
echo -e "Done!\n"

echo "Creating table old_mta..."
# PSQL commands start here
psql mta-join <<EOF
CREATE TABLE old_mta (
report_date_day         date,
report_date_week        date,
report_date_month       date,
campaign_name           varchar(255),
package_name            varchar(255),
audience_name           varchar(255),
inventory_provider_name varchar(255),
placement_name          varchar(255),
creative_size           varchar(255),
creative_name           varchar(255),
impressions             bigint,
data_cost               numeric(20,10),
media_cost              numeric(20,10),
gross_cost              numeric(20,10),
clicks                  integer,
lta_click_actions       integer,
lta_view_actions        integer,
lta_click_revenue       integer,
lta_view_revenue        integer,
mta_actions             numeric(20,10),
mta_revenue             integer,
mta_action_count        integer,
mta_seconds             integer,
campaign_id             integer,
placement_id            bigint,
creative_id             bigint,
package_id              bigint,
audience_id             integer);
\q
EOF
echo -e "Done!\n"

# Upload old MTA file into table old_mta
echo -n "Inserting old mta file into old_mta table..."
psql mta-join <<EOF
\COPY old_mta from '$OLD_MTA_FILE' with delimiter ',' CSV header;
EOF
echo -e "Done!\n"

# Create a bunch of indexes
echo -n "Creating table indexes for faster querying..."
psql mta-join <<EOF
CREATE INDEX campaign_id_index_old ON old_mta (campaign_id);
CREATE INDEX campaign_id_index_new ON new_mta (campaign_id);
CREATE INDEX placement_id_index_old ON old_mta (placement_id);
CREATE INDEX placement_id_index_new ON new_mta (placement_id);
CREATE INDEX creative_id_index_old ON old_mta (creative_id);
CREATE INDEX creative_id_index_new ON new_mta (creative_id);
CREATE INDEX package_id_index_old ON old_mta (package_id);
CREATE INDEX package_id_index_new ON new_mta (package_id);
CREATE INDEX audience_id_index_old ON old_mta (audience_id);
CREATE INDEX audience_id_index_new ON new_mta (audience_id);
\q
EOF
echo -e "Done!\n"

echo "Writing join to file..."
psql -d mta-join -t -A -F ',' -c 'copy (SELECT * FROM new_mta, old_mta
WHERE report_date = report_date_day AND new_mta.campaign_id =
old_mta.campaign_id AND new_mta.placement_id = old_mta.placement_id
AND new_mta.creative_id = old_mta.creative_id AND new_mta.package_id =
old_mta.package_id AND new_mta.audience_id = old_mta.audience_id) to
stdout;' > $OUTPUT_FILE
echo -e "Done!\n"

Re: [GENERAL] Valid query times out when run from bash script

2012-03-27 Thread h...@101-factory.eu
run the script with bash -v or
bash -vvv for extra detailed verbose logging.

See what's wrong; most of the time it's a matter of using the right
closure of the statements with ' or ".

Henk Bronk



On 27 mrt. 2012, at 20:37, W. David Jarvis william.d.jar...@gmail.com wrote:

 Hello all -
 
 I've been trying to get a bash script set-up that creates a DB, does a
 join, writes the join to file, and then cleans itself up afterwards.
 Everything within the script runs fine when entered at an actual
 prompt, but when I run the script the SELECT query runs indefinitely.
 I'm not sure what the cause is—the script is being run on a box
 connecting to its own local postgres installation, so it can't be a
 connection issue. Any thoughts?
 
 The bash script is included below; postgres version is 9.0.4. Any help
 very much appreciated :)
 
 [script snipped - quoted in full in the original message]

Re: [GENERAL] Valid query times out when run from bash script

2012-03-27 Thread Steve Crawford

On 03/27/2012 11:37 AM, W. David Jarvis wrote:

Hello all -

I've been trying to get a bash script...but when I run the script the SELECT 
query runs indefinitely.

...
# Create a bunch of indexes
echo -n Creating table indexes for faster querying...
psql mta-joinEOF
CREATE INDEX campaign_id_index_old ON old_mta (campaign_id);
...
CREATE INDEX audience_id_index_new ON old_mta (audience_id);
...
psql -d mta-join -t -A -F ',' -c 'copy (SELECT * FROM new_mta, old_mta
WHERE report_date = report_date_day AND new_mta.campaign_id =
old_mta.campaign_id AND new_mta.placement_id = old_mta.placement_id
AND new_mta.creative_id = old_mta.creative_id AND new_mta.package_id =
old_mta.package_id AND new_mta.audience_id = old_mta.audience_id) to
stdout;'  $OUTPUT_FILE


Some things to try:

1. Change "psql" to "echo" and then run the statement that gets dumped 
out by hand. See what happens.


2. Change your psql command to be an EXPLAIN..., run the script and 
examine the output of the explain.


Some thoughts:

1. You immediately run a select after creating, importing and making a 
bunch of indexes. Is it possible that autovacuum hasn't yet analyzed the 
table and you are getting a bad query plan? If so (or to make sure it 
doesn't happen), update your script to include an ANALYZE of the 
appropriate tables before you run the SELECT.


2. Run analyze anyway. It is likely that most of the indexes you create 
are unused. Unless your data changes so much that the planner would 
choose different indexes for different imports, you can probably 
eliminate the steps of creating the unused indexes.


3. You can probably wrap all the steps into a single connection for a 
small speed improvement. Running everything within a single connection 
would allow you to use temporary tables which are unlogged. Alternately, 
since you are on 9.1, you could leave the script alone and create an 
unlogged table (CREATE UNLOGGED TABLE ...). Both temporary and unlogged 
tables don't write to the WAL so you get a speed improvement in return 
for the data being at risk in a crash. In this case, the database is 
just doing some temporary processing steps that are deleted anyway so 
you don't need crash safety.
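
A sketch of those two suggestions applied to the script above (UNLOGGED
requires 9.1+; column list abbreviated):

CREATE UNLOGGED TABLE old_mta (
    report_date_day  date,
    campaign_id      integer
    -- ... remaining columns unchanged from the original CREATE TABLE
);

-- and just before the final SELECT:
ANALYZE new_mta;
ANALYZE old_mta;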


Cheers,
Steve




Re: [GENERAL] Valid query times out when run from bash script

2012-03-27 Thread Raymond O'Donnell
On 27/03/2012 19:37, W. David Jarvis wrote:
 Hello all -
 
 I've been trying to get a bash script set-up that creates a DB, does a
 join, writes the join to file, and then cleans itself up afterwards.
 Everything within the script runs fine when entered at an actual
 prompt, but when I run the script the SELECT query runs indefinitely.
 I'm not sure what the cause is—the script is being run on a box
 connecting to its own local postgres installation, so it can't be a
 connection issue. Any thoughts?

Only a guess, but would the createdb line, or indeed any of the psql
invocations, be prompting for a password? That would cause the script
to hang indefinitely.

Ray.


-- 
Raymond O'Donnell :: Galway :: Ireland
r...@iol.ie



Re: [GENERAL] Valid query times out when run from bash script

2012-03-27 Thread W. David Jarvis
All -

Thanks for the prompt responses.  I've actually narrowed it down a bit,
and it seems that something earlier in the script (specifically, the for
loop that assembles $ADDITIONAL_CREATION_FIELDS and
$ADDITIONAL_SELECTION_FIELDS) is causing Bash to write extremely slowly
(if I extract the query itself into an independent Bash script it
executes fine). Since this is primarily a Bash issue rather than a psql
issue, I may take my search elsewhere.

 - D

On Tue, Mar 27, 2012 at 1:20 PM, Steve Crawford
scrawf...@pinpointresearch.com wrote:
 [quoted message snipped]




-- 
W. David Jarvis
M: 203.918.2328



Re: [GENERAL] Valid query times out when run from bash script

2012-03-27 Thread W. David Jarvis
No prompt, seems to be using .pgpass without a problem.

On Tue, Mar 27, 2012 at 1:59 PM, Raymond O'Donnell r...@iol.ie wrote:
 On 27/03/2012 19:37, W. David Jarvis wrote:
 Hello all -

 I've been trying to get a bash script set-up that creates a DB, does a
 join, writes the join to file, and then cleans itself up afterwards.
 Everything within the script runs fine when entered at an actual
 prompt, but when I run the script the SELECT query runs indefinitely.
 I'm not sure what the cause is—the script is being run on a box
 connecting to its own local postgres installation, so it can't be a
 connection issue. Any thoughts?

 Only a guess, but would the createdb line, or indeed any of the psql
 invocations, be prompting for a password? - That would cause the script
 to hang indefinitely.

 Ray.


 --
 Raymond O'Donnell :: Galway :: Ireland
 r...@iol.ie



-- 
W. David Jarvis
M: 203.918.2328



[GENERAL] create one function to let other user execute vacuum command. got such an error.

2012-03-27 Thread leaf_yxj
I created a function to let other users execute the VACUUM command and got
the following error. Please help. Thanks. Regards, Grace

rrp= create function vacuum_f ( tablename char(100))  
 Returns char(100) AS $$
   
vacuum tablename; 
$$ Language plpgsql security definer;
ERROR:  syntax error at or near "vacuum"
LINE 3:   vacuum tablename;
  ^
rrp= 




Re: [GENERAL] postgresql commit.

2012-03-27 Thread leaf_yxj

Hi Andrew,
 
Thanks. I got it.
 
Regards.
 
Grace
At 2012-03-27 12:36:18, Andrew Sullivan [via PostgreSQL] wrote:
On Mon, Mar 26, 2012 at 06:22:20PM -0700, leaf_yxj wrote:
 In an Oracle database, the other user can't see uncommitted data; for
 postgresql 8.2, it looks like once you issue an insert, the other user will
 automatically see the new record. Was that wrong? I am pretty new to postgresql.

Did you actually start a transaction (i.e. issue BEGIN)?  If not, your
statement COMMITs automatically.  
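
For example:

BEGIN;
INSERT INTO t VALUES (1);  -- other sessions cannot see this row yet
COMMIT;                    -- now they can

Without the explicit BEGIN, the INSERT commits (and becomes visible)
immediately.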

Also, you might want to note the remark on the PostgreSQL security
pages: "Please note that versions prior to 8.3 are no longer supported
and vulnerabilities for these versions may not be included in this
list. New vulnerabilities in these versions are no longer patched."
See http://www.postgresql.org/support/security/.  I'd plan to upgrade
soon.

Best,

A

--
Andrew Sullivan
[hidden email]




