[ADMIN] How to make the row changes inside trigger function visible to the top level sql statement?

2012-08-08 Thread haifeng liu
Hello,

I am doing table partitioning. Everything works except that after executing an 
INSERT statement I can't get the number of affected rows; it is always 0. After 
searching the documentation, I found that row changes made inside a trigger 
function are not visible to the top-level statement.

Partitioning a table with a trigger function that redirects insertions to the 
correct partition is the recommended approach, and checking the affected row 
count is also a common practice. pgAdmin checks the affected rows too, so when 
I type a new row and click the save button, the save appears to fail even 
though it actually succeeds.
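
For reference, the redirect trigger in this kind of setup typically looks like 
the following sketch (the table and partition names are assumptions, not taken 
from this thread); the final RETURN NULL is exactly what makes the parent-level 
INSERT report 0 rows affected:

```sql
-- Sketch of a partition-redirect trigger (names are illustrative).
CREATE OR REPLACE FUNCTION measurements_insert_redirect() RETURNS trigger AS $$
BEGIN
    -- Route the new row to the appropriate child partition.
    INSERT INTO measurements_2012 VALUES (NEW.*);
    -- Returning NULL cancels the insert on the parent table, so the
    -- top-level INSERT statement reports 0 rows affected.
    RETURN NULL;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER measurements_insert_trg
    BEFORE INSERT ON measurements
    FOR EACH ROW EXECUTE PROCEDURE measurements_insert_redirect();
```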

How can I make the row changes inside the trigger function visible to the top 
level statement?

Thanks to all in advance!
-- 
Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-admin


[ADMIN] Data migration to sql server 2008

2012-08-08 Thread karthi keyan
hello admin,
I am using PostgreSQL for my application development; it is very robust
and secure.

For an interlinked application, I need to migrate data into SQL Server 2008.
Please point me to references or samples showing how I can migrate the data.

Regards
Karthik


Re: [ADMIN] Data migration to sql server 2008

2012-08-08 Thread Craig Ringer

On 08/07/2012 07:06 PM, karthi keyan wrote:

For an interlinked application, I need to migrate data into SQL Server 2008.


The simplest way is usually to connect with psql and export CSV data with:

\copy (SELECT ...) to '/path/to/file.csv' csv

or for a whole table:

\copy tablename to '/path/to/file.csv' csv

For usage of COPY see:

\h COPY

and http://www.postgresql.org/docs/current/static/sql-copy.html


... then load that CSV into MS SQL Server 2008 like you would anything else.
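
On the SQL Server side, loading the file could look like this sketch (the table 
name and file path are assumptions; note that BULK INSERT in SQL Server 2008 
does not understand quoted CSV fields, so you may need a format file or an 
unquoted export):

```sql
-- T-SQL sketch: load the CSV produced by \copy into an existing table.
BULK INSERT dbo.tablename
FROM 'C:\import\file.csv'
WITH (
    FIELDTERMINATOR = ',',
    ROWTERMINATOR   = '\n'
);
```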


A more complicated but more powerful way is to use an ETL tool like 
Pentaho or Talend to connect to both databases and merge/convert data.


--
Craig Ringer


[ADMIN] Streaming replication failover with 3 servers

2012-08-08 Thread Stuart Bishop
Hi.

I'm trying to work out failover and disaster recovery procedures for a
cluster of three servers. Streaming replication is being used with a high
wal_keep_segments, no log shipping is happening. I need to avoid the
several hours it takes to rebuild a hot standby from scratch.

ServerA is the master.
ServerB is a streaming hot standby and preferred failover server.
ServerC is a streaming hot standby.


For a planned failover, maintenance on ServerA:

1. Shut down ServerB & ServerC
2. Shut down ServerA
3. Copy pg_xlog from ServerA to ServerB and ServerC
4. Reconfigure ServerB as master, start it up.
5. Reconfigure ServerC as streaming hot standby of ServerB. Start it.
6. After maintenance, reconfigure ServerA as streaming hot standby
   of ServerB. Start it.

For an unplanned failover, ServerA has exploded:

1. Run 'SELECT pg_last_xlog_receive_location()' on ServerB and ServerC,
   determining which is most up to date.
2. Shut down ServerB and ServerC
3. If ServerC is more up to date, copy pg_xlog from ServerC to ServerB.
4. Reconfigure ServerB as master, start it up.
5. Reconfigure ServerC as streaming hot standby of ServerB, start it up.
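
Steps 4-5 amount to editing recovery.conf on each standby; a minimal sketch for 
ServerC (hostname, port, and user are assumptions):

```
# recovery.conf on ServerC (sketch; connection details are illustrative)
standby_mode = 'on'
primary_conninfo = 'host=serverb port=5432 user=replicator'
# follow ServerB's new timeline after its promotion
recovery_target_timeline = 'latest'
```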


Does this look correct to people?

Am I going to end up in trouble copying files into pg_xlog like this on a
busy system?

Is it over-engineered? E.g., will a master ensure everything is streamed to
connected hot standbys before a graceful shutdown?

-- 
Stuart Bishop stu...@stuartbishop.net
http://www.stuartbishop.net/



Re: [ADMIN] standby with a fixed lag behind the master

2012-08-08 Thread Amador Alvarez

Hi there,

Why not ship older WAL files to the target on a regular basis? On the master, 
a crontab job can ship the desired WAL files (those n hours older than the 
current time) and then clean up the ones already shipped; check the rsync 
options for this.
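
As a sketch (the paths, the 6-hour threshold, and the destination host are all 
assumptions), such a crontab job on the master might look like:

```
# crontab sketch: every hour, ship WAL files older than 6 hours to the standby
0 * * * * find /var/lib/postgresql/9.1/main/pg_xlog -type f -mmin +360 -exec rsync -a {} standby:/var/lib/postgresql/archive/ \;
```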


A.A.



On 07/26/2012 02:24 AM, Alexey Klyukin wrote:

Hello,

I've recently come across the task of setting up a PostgreSQL 9.1 standby 
server that is N hours behind the master, i.e. only transactions that finished 
N hours in the past or earlier should be replayed on the standby. The goal is 
to have a known-good-state server to perform backups from and possibly revert 
to in case of unwanted changes on the primary. It seems that there is no 
mechanism in PostgreSQL to simply ask the standby to keep a fixed distance (in 
terms of either WAL segments or time) behind the primary, so these are possible 
solutions:

1. Use restore command on standby to fetch the current WAL segment only if it 
has been created not less than N hours in the past (according to ctime).
2. Pause the restore process on standby if the lag * is less than N hours (with 
pg_xlog_replay_pause()) and resume if it is more than that.
3. Set recovery_target_time to current - 6 hours and pause_at_recovery_target 
to true, periodically check whether the recovery is paused, reset the recovery 
target time to a new value (and restart the standby) if it is.

* - the lag would be calculated as now() - pg_last_xact_replay_timestamp() on 
standby.
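
The age check at the heart of option 1 could be sketched like this (the paths 
and the 6-hour threshold are assumptions); a restore_command wrapper would hand 
over a WAL segment only once its file timestamp is old enough, and otherwise 
exit non-zero so the standby retries later:

```shell
# Sketch: decide whether a WAL segment is old enough to restore.
is_old_enough() {
    # succeed only if the file's mtime is more than $2 minutes in the past
    [ -n "$(find "$1" -mmin +"$2" 2>/dev/null)" ]
}

tmp=$(mktemp -d)
touch "$tmp/000000010000000000000001"   # a freshly archived segment
if is_old_enough "$tmp/000000010000000000000001" 360; then
    echo "restore"
else
    echo "skip"   # restore_command would exit non-zero; recovery retries
fi
# prints: skip
```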

Both 2 and 3 require an external cron job to pause/resume the recovery, and 1, 
while being the easiest of all, doesn't work with streaming replication (unless 
it's combined with WAL shipping). I wonder if there are other well-established 
approaches to solving this problem, and whether there is interest in adding 
such a feature to core?

Thank you,
--
Alexey Klyukin  http://www.commandprompt.com
The PostgreSQL Company – Command Prompt, Inc.









Re: [ADMIN] PostgreSQL oom_adj postmaster process to -17

2012-08-08 Thread Radovan Jablonovsky
Currently there is a maximum of 600 connections.

On Fri, Aug 3, 2012 at 2:05 PM, Scott Marlowe scott.marl...@gmail.com wrote:

 On Fri, Aug 3, 2012 at 12:08 PM, Radovan Jablonovsky
 radovan.jablonov...@replicon.com wrote:
  Thanks you for your response.
 
  Database config:
  shared_buffers = 8GB
  temp_buffers = 32MB
  work_mem = 64MB
  maintenance_work_mem = 512MB
  effective_cache_size = 16GB
 
  Under usual load there is not much pressure on memory, but it is possible
  for all clients to start running heavy reports. These are valid requests and
  could consume all memory. In this borderline, unlikely but possible
  scenario it could be useful to let the OOM killer kill client
  processes/connections but leave PostgreSQL system processes (postmaster,
  writer, stat, log, streaming, ...) out of its reach.

 Your only realistic solution is to either limit the incoming
 connections via a connection pooler like pgbouncer or to lower your
 work_mem to something smaller. What's your current max_connections
 setting?
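
For the pooler route, a minimal pgbouncer configuration might look like this 
sketch (database name, ports, and pool sizes are illustrative assumptions):

```
; pgbouncer.ini sketch
[databases]
mydb = host=127.0.0.1 port=5432 dbname=mydb

[pgbouncer]
listen_addr = 127.0.0.1
listen_port = 6432
auth_type = md5
auth_file = /etc/pgbouncer/userlist.txt
pool_mode = transaction
max_client_conn = 600     ; clients may still open 600 connections...
default_pool_size = 20    ; ...but only 20 reach PostgreSQL at once
```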




-- 

*Radovan Jablonovsky* | SaaS DBA | Phone 1-403-262-6519 (ext. 256) | Fax
1-403-233-8046




Re: [ADMIN] PostgreSQL oom_adj postmaster process to -17

2012-08-08 Thread Kevin Grittner
Radovan Jablonovsky radovan.jablonov...@replicon.com wrote:
 
 PostgreSQL version 9.1.1 with 32GB of RAM
 
 shared_buffers = 8GB
 temp_buffers = 32MB
 work_mem = 64MB
 maintenance_work_mem = 512MB
 
 Currently there are maximum 600 connections.
 
Please read:
 
http://wiki.postgresql.org/wiki/Number_Of_Database_Connections
 
-Kevin



Re: [ADMIN] PostgreSQL oom_adj postmaster process to -17

2012-08-08 Thread Craig Ringer

On 08/09/2012 04:24 AM, Kevin Grittner wrote:

http://wiki.postgresql.org/wiki/Number_Of_Database_Connections

Can we please please PLEASE link to that as a comment above max_connections?

Last time this came up, nobody was happy with the wording of a comment, so 
nothing got done. It's a real usability wart - causing real-world 
performance and reliability problems - that people unwittingly raise 
max_connections to absurd levels because they get no warnings, hints or 
guidance of any sort.


--
Craig Ringer
