Re: [ADMIN] Schema diagramming tool?

2012-09-04 Thread Bob Lunney
SchemaSpy is kind of handy. 

Bob

Sent from my iPhone

On Sep 4, 2012, at 11:35 AM, Craig James cja...@emolecules.com wrote:

 Can anyone recommend a good tool for producing a drawing of an existing 
 database schema?  I don't need a design tool, but rather one that can take an 
 existing schema and produce a nice diagram that can be further edited and 
 beautified.  I want something I can print and hang on the wall as a 
 reference document for developers writing code and SQL.
 
 Thanks,
 Craig
 


-- 
Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-admin


Re: [ADMIN] revoked permissions on table still allows users to see table's structure

2011-07-22 Thread Bob Lunney
Juan,

That is what schemas, permissions and search paths are for.  You create 
multiple schemas, put the tables in the appropriate ones, grant usage 
permissions to those users that need access to the schemas and set the search 
path to search the schemas for objects.  Below is the test case.  It helps if 
you reset the psql prompt to display the current user:

\set PROMPT1 '%m:%>:%n:%/:%R%x%# '

As the database owner:

create schema seethat;
create schema seewhat;
create user al_low;
create user dee_ny;
grant usage on schema seethat to al_low, dee_ny;
grant usage on schema seewhat to al_low;
set search_path to seethat, seewhat, public;

create table seethat.open(open_id int);
create table seewhat.closed(closed_id int);


set session authorization al_low;

\d


set session authorization dee_ny;

\d


Hope that helps!

Bob Lunney

- Original Message -
From: Juan Cuervo (Quality Telecom) juanrcue...@quality-telecom.net
To: Scott Marlowe scott.marl...@gmail.com
Cc: pgsql-admin@postgresql.org
Sent: Friday, July 22, 2011 8:24 AM
Subject: Re: [ADMIN] revoked permissions on table still allows users to see 
table's structure

Hi Scott

Thanks for your answer.

There should be a way to prevent this for normal users who only need 
access to a set of tables, a view or even a stored procedure. (Maybe a 
VIEW_SCHEMA privilege for roles?) Viewing a table's structure should only 
be allowed for users who have at least one privilege on the table.

It doesn't make much sense to me that every user with access to the 
database would be able to see the whole database design.

Do you know if this is common in other RDBMS ?

Regards,

Juan R. Cuervo Soto
Quality Telecom Ltd
www.quality-telecom.net
PBX : (575) 3693300
CEL : (57)  301-4174865


On 21/07/2011 08:48 p.m., Scott Marlowe wrote:
 On Thu, Jul 21, 2011 at 6:08 PM, Juan Cuervo (Quality Telecom)
 juanrcue...@quality-telecom.net  wrote:
 Hi All

 I'm new to the list, but have a few years as a postgres user. I want to share
 what I consider a rare behavior of postgresql regarding database object
 permissions:

 I have noticed that there is no way (at least none I know of) to prevent a
 user from seeing the table structures in a database.

 Is this a normal behavior of the product ?
 Yep.  Completely normal.

 Is there a way to prevent a user from seeing my table's, procedure's and
 function's code ?
 Don't let them connect to the db?  That's all I can think of.




Re: [ADMIN] Problem retrieving large records (bytea) data from a table

2011-07-20 Thread Bob Lunney
PostgreSQL has to accumulate all the rows of a query before returning the 
result set to the client.  It is probably spooling those several 400-450 MB 
docs, plus all the other attributes, to a temporary file prior to sending the 
results back.  If you have just three documents stored in the database you're 
looking at > 1 GB for the spool file alone.

Remember, select * is a convenience.  You will probably get the same response 
time as before if you name the columns, except doc_data, in the select clause 
of your query.  See the 'extended' attribute of doc_data?  That means the bytea 
data is stored out-of-line from the other columns like id, create_date and by.  
See http://www.postgresql.org/docs/9.0/interactive/storage-toast.html for the 
particulars of TOAST.

If you need to remove the doc data from the table, a quick way to do that would 
be to either update the table and set doc_data to NULL, or use the create 
table as select (CTAS) syntax and specify NULL as the value for doc_data, then 
drop the original table and rename the new one to doc_table.  Note if you use 
the CTAS method you will have to alter the table afterwards to re-establish the 
not null and default attributes of each column.  Don't forget to recreate the 
primary key, too.

Finally, talk with the developers to see if the document data really needs to 
be in the database, or could just be in a file outside of the database.  If you 
need transactional semantics (ACID properties) to manage the documents you may 
be stuck.  If not, replace doc_data with doc_filename (or maybe file_n is 
already that column) and move on from there.

Good luck,

Bob Lunney



On Wednesday 20 July 2011 17:31:45, pasman pasmański wrote:
 You may do a backup of this table. Then with ultraedit search your
 documents and remove them.
 
 2011/7/5, jtke...@verizon.net jtke...@verizon.net:
  I am having a hang condition every time I try to retrieve large
  (bytea) records from a table.
  The OS is a 5.11 snv_134 i86pc i386 i86pc Solaris with 4GB memory
  running Postgresql 8.4.3 with a standard postgresql.conf file (nothing
  has been changed).
  I have the following table called doc_table
       Column    |              Type              |                              Modifiers                               | Storage
   --------------+--------------------------------+----------------------------------------------------------------------+----------
    id           | numeric                        | not null                                                             | main
    file_n       | character varying(4000)        |                                                                      | extended
    create_date  | timestamp(6) without time zone | not null default (clock_timestamp())::timestamp(0) without time zone | plain
    desc         | character varying(4000)        |                                                                      | extended
    doc_cc       | character varying(120)         | not null                                                             | extended
    by           | numeric                        | not null                                                             | main
    doc_data     | bytea                          |                                                                      | extended
    mime_type_id | character varying(16)          | not null                                                             | extended
    doc_src      | text                           |                                                                      | extended
    doc_stat     | character varying(512)         | not null default 'ACTIVE'::character varying                         | extended
   Indexes:
       "documents_pk" PRIMARY KEY, btree (document_id)
 
 
  A while ago some developers inserted several records with a
  document (stored in doc_data) that was around 400 - 450 MB each.  Now
  when you do a select * (all) from this table you get a hang and the
  system becomes unresponsive.  Prior to these inserts, a select * (all,
  no where clause) worked.  I'm also told a select * from doc_table
  where id = xxx still works.  I haven't seen any error messages in the
  postgresql log files.
  So I'm not sure how to find these bad records and why I am getting a
  hang.  Since this postgresql is running with the default config files
  could I be running out of a resource?  If so, I'm not sure how or
  how much to add to these resources to fix this problem since I have
  very little memory on this system.  Does anyone have any ideas why I
  am getting a hang?  Thanks
 
  --
  Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org)
  To make changes to your subscription:
  http://www.postgresql.org/mailpref/pgsql-admin
 
 
 
 -- 
 
 pasman
 



-- 
Achilleas Mantzios



Re: [ADMIN] Parallel pg_dump on a single database

2011-07-01 Thread Bob Lunney
Tom,

Thanks for the response, but I figured out the error is mine, not pg_dump's.  
In short (to minimize my embarrassment!) don't write to the same file from 
three different pg_dumps. 

The good news is running multiple pg_dumps simultaneously on a single database 
with exclusive coverage of different table sets works great, and my overall 
dump times have been reduced to one-fifth the time it takes to run a single 
pg_dump.  
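The working layout, for the record: each pg_dump covers a disjoint slice of the database and writes to its own file. A sketch, in which the database name, table patterns, and file names are all illustrative:

```shell
# Three pg_dumps in parallel, each covering a disjoint table set and,
# critically, writing to its OWN output file.
pg_dump -Fc -t 'big_schema.*' -f dump_big.fc  mydb &
pg_dump -Fc -t 'logs.*'       -f dump_logs.fc mydb &
pg_dump -Fc -N big_schema -N logs -f dump_rest.fc mydb &
wait

# Verify each file independently.
for f in dump_big.fc dump_logs.fc dump_rest.fc; do
    pg_restore "$f" > /dev/null || echo "$f failed verification"
done
```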

BTW, I'm using PG 8.4.1, going to 8.4.8 soon, and it's working great.  Thanks to 
all for the excellent database software.


Regards,

Bob Lunney


From: Tom Lane t...@sss.pgh.pa.us
To: Bob Lunney bob_lun...@yahoo.com
Cc: pgsql-admin@postgresql.org pgsql-admin@postgresql.org
Sent: Friday, July 1, 2011 2:09 PM
Subject: Re: [ADMIN] Parallel pg_dump on a single database 

Bob Lunney bob_lun...@yahoo.com writes:
 Is it possible (or smart!) to run multiple pg_dumps simultaneously on a 
 single database, dumping different parts of the database to different files 
 by using table and schema exclusion?  I'm attempting this and sometimes it 
 works and sometimes when I check the dump files with 
   pg_restore -Fc dumpfile > /dev/null

 I get 

   pg_restore: [custom archiver] found unexpected block ID (4) when reading 
data -- expected 4238

That sure sounds like a bug.  What PG version are you using exactly?
Can you provide a more specific description of what you're doing,
so somebody else could reproduce this?

            regards, tom lane



[ADMIN] Parallel pg_dump on a single database

2011-06-24 Thread Bob Lunney
Is it possible (or smart!) to run multiple pg_dumps simultaneously on a single 
database, dumping different parts of the database to different files by using 
table and schema exclusion?  I'm attempting this and sometimes it works and 
sometimes when I check the dump files with 


  pg_restore -Fc dumpfile > /dev/null

I get 

  pg_restore: [custom archiver] found unexpected block ID (4) when reading data 
-- expected 4238

I suspect that locks are colliding sometimes and not others, but I'm not sure.  
Little help?  

Thanks in advance,

Bob Lunney



Re: [ADMIN] visualizing database schema - png/jpeg?

2011-05-19 Thread Bob Lunney
SchemaSpy works for me.

Bob Lunney

--- On Wed, 5/18/11, neubyr neu...@gmail.com wrote:

 From: neubyr neu...@gmail.com
 Subject: [ADMIN] visualizing database schema - png/jpeg?
 To: pgsql-admin@postgresql.org
 Date: Wednesday, May 18, 2011, 3:09 PM
 Does anyone know of tools that can generate a PNG/JPEG image from a
 pgsql database schema?
 
 --
 neuby.r
 



Re: [ADMIN] best practice for moving millions of rows to child table when setting up partitioning?

2011-04-27 Thread Bob Lunney
Mark,

Comments inline below for items 2 and 3.  What you are planning should work 
like a charm.

Bob Lunney

--- On Wed, 4/27/11, Mark Stosberg m...@summersault.com wrote:

 From: Mark Stosberg m...@summersault.com
 Subject: [ADMIN] best practice for moving millions of rows to child table 
 when setting up partitioning?
 To: pgsql-admin@postgresql.org
 Date: Wednesday, April 27, 2011, 10:48 AM
 
 Hello,
 
 I'm working on moving a table with over 30 million rows to be
 partitioned.  The table sees several inserts per second.  It's
 essentially an activity log that only sees insert activity and is
 lightly used for reporting, such that queries against it can safely be
 disabled during a transition.
 
 I'm looking for recommendations for a way to do this that will be least
 disruptive to the flow of insert statements that will continue to
 stream in.
 
 Here's the plan which seems best to me at the moment.  Is it
 reasonable?
 
 1. Creating the empty/future partitions is easy.  I have the code for
 this done already, and will make several partitions in advance of
 needing them.
 
 2. To create the partitions that should have data moved from the
 parent, I'm thinking of creating them, and then before they are live,
 using INSERT ... SELECT to fill them with data from the parent table.
 I'll run the INSERT first, and then add their indexes.
 
 

Use create table as select ... (CTAS) instead of creating the table, then 
inserting.  Since the table is created and populated atomically there is no 
need to log the inserts in WAL, and the operation is much faster.

 3. I will then install the trigger to redirect the inserts
 to the child
 table.
 

If possible, its better to have the code simply do inserts directly into the 
child table - after all, if the partitioning is based on date, both the code 
and database know the date, so the code knows to which child table it should 
write at any given moment.
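A sketch of steps 2 and 3 using the CTAS advice above, under assumed, illustrative names (an activity_log parent, a ts timestamp partition key, and monthly children — none of these come from the original post):

```sql
-- Step 2: create and populate the child atomically.  CTAS can skip WAL
-- for the copied rows (when WAL archiving is off); index afterwards.
CREATE TABLE activity_log_2011_05 AS
SELECT * FROM activity_log
 WHERE ts >= DATE '2011-05-01' AND ts < DATE '2011-06-01';

-- Attach the child to the parent and add the partition constraint
-- so constraint exclusion can prune it during queries.
ALTER TABLE activity_log_2011_05 INHERIT activity_log;
ALTER TABLE activity_log_2011_05
  ADD CONSTRAINT activity_log_2011_05_ts_check
  CHECK (ts >= DATE '2011-05-01' AND ts < DATE '2011-06-01');

CREATE INDEX activity_log_2011_05_ts_idx ON activity_log_2011_05 (ts);
ANALYZE activity_log_2011_05;
```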

 4. There will still be a relatively small number of new rows from the
 parent table to deal with that came in after the INSERT from #2 was
 started, so a final INSERT ... SELECT statement will be made to copy
 the remaining rows.
 
 
 5. Finally, I'll drop the indexes on the parent table and
 truncate it.
 
 Thanks for advice here.  If there's a tutorial out there about this
 that I've missed, I'm happy to review it instead of having it rehashed
 here.
 
 
 Thanks for the help!
 
     Mark
 
 




Re: [ADMIN] PKs without indexes

2011-04-19 Thread Bob Lunney

--- On Tue, 4/19/11, Jerry Sievers gsiever...@comcast.net wrote:

 From: Jerry Sievers gsiever...@comcast.net
 Subject: Re: [ADMIN] PKs without indexes
 To: jweatherma...@alumni.wfu.edu
 Cc: pgsql-admin@postgresql.org
 Date: Tuesday, April 19, 2011, 11:19 AM
 John P Weatherman jweatherma...@alumni.wfu.edu
 writes:
 
  Hi all,
 
  I am attempting to set up slony-i and have run into a minor glitch...
  apparently whoever designed the database I have inherited didn't
  consistently build indexes to go along with Primary Keys, or at least
  that's the error message I have been getting.  I am far from confident
  in my own sqlfu in the catalog tables.  Does anyone have a script for
  identifying tables without indexes that correspond to their PKs?  I'm
  just trying to avoid re-inventing the wheel if I can help it.
 
 Here's an example for you...
 
 begin;
 
 create schema foo;
 set search_path to foo;
 
 create table haspk (a int primary key);
 create table missingpk (a int);
 
 select relname
 from pg_class c
 join pg_namespace n on c.relnamespace = n.oid
 where nspname = 'foo'
 and relkind = 'r'
 and c.oid not in (
     select conrelid
     from pg_constraint
     where contype = 'p'
 );
 
 abort;
 
 HTH

Slony will use any unique index on a table for replication purposes, so the 
list of tables should come from:

select relname
  from pg_class c
  join pg_namespace n on c.relnamespace = n.oid
 where nspname = current_schema()
   and relkind = 'r'
   and c.oid not in (
       select indrelid
         from pg_index
        where indisprimary or indisunique
   )
 order by 1;

Bob Lunney





Re: [ADMIN] grant select script

2011-03-30 Thread Bob Lunney
Marc,
Try pg_stat_user_tables - it will eliminate the tables in pg_catalog, 
information_schema, and the toast tables.
Bob Lunney
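The suggested view drops straight into the script's query; a minimal sketch:

```sql
-- pg_stat_user_tables already excludes pg_catalog, information_schema,
-- and TOAST tables, so no extra grep filtering is needed.
SELECT relname FROM pg_stat_user_tables ORDER BY relname;
```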

--- On Wed, 3/30/11, Marc Fromm marc.fr...@wwu.edu wrote:

From: Marc Fromm marc.fr...@wwu.edu
Subject: [ADMIN] grant select script
To: pgsql-admin@postgresql.org pgsql-admin@postgresql.org
Date: Wednesday, March 30, 2011, 1:54 PM



 
 


 

I am working with a script to automate granting select on all tables in a 
database to a user.

#!/bin/bash
for table in $(echo "SELECT relname FROM pg_stat_all_tables;" | psql cswe2 | grep -v pg_ | grep '^ '); do
    echo "GRANT SELECT ON TABLE $table to tom;"
    echo "GRANT SELECT ON TABLE $table to tom;" | psql cswe2
done

The script works, meaning it grants the select to the user, but it generates 
errors on tables that do not exist, like the following. 
The database cswe2 does not contain the table sql_languages, unless it is 
hidden. Is there a way to tell the script to ignore them? 

GRANT SELECT ON TABLE sql_languages to tom; 
ERROR:  relation "sql_languages" does not exist 
Thanks

Marc

Marc Fromm
Information Technology Specialist II
Financial Aid Department
Western Washington University
Phone: 360-650-3351
Fax: 360-788-0251

Re: [ADMIN] full vacuum of a very large table

2011-03-29 Thread Bob Lunney

--- On Tue, 3/29/11, Nic Chidu n...@chidu.net wrote:

 From: Nic Chidu n...@chidu.net
 Subject: [ADMIN] full vacuum of a very large table
 To: pgsql-admin@postgresql.org
 Date: Tuesday, March 29, 2011, 11:56 AM
 Got a situation where a 130 mil row (137GB) table needs to be brought down
 in size to < 10 mil records (most recent) with the least amount of downtime.
 
 Doing a full vacuum would be faster on:
  - 120 mil rows deleted and 10 mil active (delete most of them, then full
 vacuum)
  - 10 mil deleted and 120 mil active (delete small batches and full vacuum
 after each delete).
 
 Any other suggestions?
 
 Thanks,
 
 Nic
 
 -- 

Nic,

Since you know the where clause to delete the 120 mil rows why not use the 
converse of that to select the 10 mil rows to retain into another table, then 
drop the original table?  No vacuum required!

Be sure to use the create table as select... syntax to avoid WAL during 
creation of the new table, and use a transaction to drop the original table and 
rename the new one.  That way users will be querying the original table right 
up until the switch over, when they will start using the new table.  

Foreign keys and other constraints may complicate things a bit, so check those 
out first.  Also, don't forget to index and analyze the new table before the 
switch over.
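A sketch of that plan, with assumed, illustrative names (big_table, a created_at retention predicate, and the index are placeholders for the poster's real schema):

```sql
-- Build the replacement outside the swap; CTAS can skip WAL when
-- archiving is off.  The retention predicate is a placeholder.
CREATE TABLE big_table_new AS
SELECT * FROM big_table
 WHERE created_at >= now() - interval '90 days';

CREATE INDEX big_table_new_created_idx ON big_table_new (created_at);
ANALYZE big_table_new;

-- Swap atomically: readers hit the old table right up to the commit.
BEGIN;
DROP TABLE big_table;
ALTER TABLE big_table_new RENAME TO big_table;
COMMIT;
```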

Good luck!

Bob Lunney






[ADMIN] PG Server Crash

2011-03-07 Thread Bob Lunney
OK, I got a nasty surprise today.  The server threw everybody out of the pool 
and logged this:

10.224.12.4(52763) idle: pthread_mutex_lock.c:80: __pthread_mutex_lock: 
Assertion `mutex->__data.__owner == 0' failed.
:4d45a006.78ec:2011-03-07 11:57:45.316 EST:LOG:  server process (PID 21298) was 
terminated by signal 6: Aborted
:4d45a006.78ec:2011-03-07 11:57:45.316 EST:LOG:  terminating any other active 
server processes

It then stopped all the backends, restarted, recovered and worked fine for the 
rest of the day.

The particulars:

SLES 10
PG 8.4.1
8-way Intel Xeon E5345 @ 2.33GHz
32 GB RAM

max_connections = 1500
shared_buffers = 1536MB
work_mem = 32MB 
maintenance_work_mem = 256MB
checkpoint_segments = 2048
effective_cache_size = 20GB
effective_io_concurrency = 6
random_page_cost = 2.0


This server has been rock solid running 8.4.1 for over a year.  It has Slony 
replicating to another mirror image server that has not had any problems 
whatsoever.

Any ideas?

Thanks in advance for all your help, and especially for the amazing database 
software!

Regards,

Bob Lunney


  



Re: [ADMIN] PG Server Crash

2011-03-07 Thread Bob Lunney
Sorry, I should have included the fact that PG was compiled from source on the 
same machine where it runs using gcc 4.1.0, and config.log has:

./configure --with-python --with-gssapi --enable-thread-safety

Bob Lunney

--- On Mon, 3/7/11, Bob Lunney bob_lun...@yahoo.com wrote:

 From: Bob Lunney bob_lun...@yahoo.com
 Subject: [ADMIN] PG Server Crash
 To: pgsql-admin@postgresql.org
 Date: Monday, March 7, 2011, 11:24 PM
 OK, I got a nasty surprise
 today.  The server threw everybody out of the pool and
 logged this:
 
 10.224.12.4(52763) idle: pthread_mutex_lock.c:80:
 __pthread_mutex_lock: Assertion `mutex->__data.__owner ==
 0' failed.
 :4d45a006.78ec:2011-03-07 11:57:45.316 EST:LOG: 
 server process (PID 21298) was terminated by signal 6:
 Aborted
 :4d45a006.78ec:2011-03-07 11:57:45.316 EST:LOG: 
 terminating any other active server processes
 
 It then stopped all the backends, restarted, recovered and
 worked fine for the rest of the day.
 
 The particulars:
 
 SLES 10
 PG 8.4.1
 8-way Intel Xeon E5345 @ 2.33GHz
 32 GB RAM
 
 max_connections = 1500
 shared_buffers = 1536MB
 work_mem = 32MB
 maintenance_work_mem = 256MB
 checkpoint_segments = 2048
 effective_cache_size = 20GB
 effective_io_concurrency = 6
 random_page_cost = 2.0
 
 
 This server has been rock solid running 8.4.1 for over a
 year.  It has Slony replicating to another mirror image
 server that has not had any problems whatsoever.
 
 Any ideas?
 
 Thanks in advance for all your help, and especially for the
 amazing database software!
 
 Regards,
 
 Bob Lunney
 
 
       
 
 






Re: [ADMIN] Postgres Backup Utility

2011-01-19 Thread Bob Lunney
Brad,

Google for SQL Power Architect, download it, and try a schema comparison.  
That might get you a ways down the road to what you want.

Bob Lunney

--- On Wed, 1/19/11, French, Martin fren...@cromwell.co.uk wrote:

From: French, Martin fren...@cromwell.co.uk
Subject: Re: [ADMIN] Postgres Backup Utility
To: Bradley Holbrook operations_brad...@servillian.ca
Cc: pgsql-admin@postgresql.org
Date: Wednesday, January 19, 2011, 2:12 AM

Ok, you say that you cannot drop and recreate, so you need to do this via 
alter statements only? That's obviously going to complicate matters, as a 
straight dump, drop, recreate, restore would be the fastest and by far 
simplest method.

So, ideally, you'll need to do a table def comparison over the two databases, 
and generate the necessary SQL to amend the tables in test accordingly.

Querying the pg_catalog/information_schema over the two DBs should give you 
the table DDL from which you can diff, and then generate the alter statements 
from the results.

Cheers

Martin

From: Bradley Holbrook [mailto:operations_brad...@servillian.ca]
Sent: 18 January 2011 16:57
To: French, Martin
Cc: pgsql-admin@postgresql.org
Subject: RE: [ADMIN] Postgres Backup Utility

Well, I can't just go dropping and recreating tables… it needs to create the 
correct alter statements if existing tables and or functions already exist.

Secondly, when I'm finished changing the structure, I need to be able to 
select the list of tables that will have content updates.

Using a script might be more work maintaining than it's worth. I have a backup 
utility that can do the job, but 3 tedious steps per schema, that only work 
about 10% of the time (and no batching options so that I can create a list of 
actions and run the list).
From: French, Martin [mailto:fren...@cromwell.co.uk]
Sent: January-18-11 5:47 AM
To: Bradley Holbrook; pgsql-admin@postgresql.org
Subject: RE: [ADMIN] Postgres Backup Utility

I'm assuming that this needs to be tightly controlled, and as such a 
replication tool is out of the question?

In that case, the first thing to pop into my head here would be to use either 
shell scripting, or the pg API, and write a C program to handle it.

I remember doing something very similar with Oracle a few years back.

Cheers

Martin

From: pgsql-admin-ow...@postgresql.org [mailto:pgsql-admin-ow...@postgresql.org] On Behalf Of Bradley Holbrook
Sent: 18 January 2011 00:08
To: pgsql-admin@postgresql.org
Subject: [ADMIN] Postgres Backup Utility

Hello!

First day on the new mailing list as I have need of some expert's advice.

I need to be able to quickly apply the structure updates from a development 
database to a testing database, and do selective data updates (like on lookup 
tables, but not content tables).

Any help would be appreciated!

Brad

___

This email is intended for the named recipient. The information contained in 
it is confidential. You should not copy it for any purposes, nor disclose its 
contents to any other party. If you received this email in error, please 
notify the sender immediately via email, and delete it from your computer.

Any views or opinions presented are solely those of the author and do not 
necessarily represent those of the company.

PCI Compliancy: Please note, we do not send or wish to receive banking, 
credit or debit card information by email or any other form of communication.

Cromwell Tools Limited, PO Box 14, 65 Chartwell Drive, Wigston, Leicester LE18 1AT. Tel 0116 2888000
Registered in England and Wales, Reg No 00986161
VAT GB 115 5713 87 900
__



Re: [ADMIN] ERROR: invalid memory alloc request size 4294967293

2011-01-04 Thread Bob Lunney
Run ulimit -a and verify the max memory size allowed for the postgres 
account.  (I assume you are running the postmaster under the postgres account, 
right?)  The allowed size should be large enough for the postmaster plus shared 
buffers and several other GUCs that require memory.  

Bob Lunney


--- On Tue, 1/4/11, Victor Hugo dos Santos listas@gmail.com wrote:

 From: Victor Hugo dos Santos listas@gmail.com
 Subject: [ADMIN] ERROR: invalid memory alloc request size 4294967293
 To: pgsql-admin@postgresql.org
 Date: Tuesday, January 4, 2011, 8:48 AM
 Hello,
 
 Actually I use postgresql version 8.4.6-0ubuntu10.04 on a bacula server
 to save information about backups from bacula.
 But, 2 days ago, postgresql raised an error when I ran the command
 pg_dump. This is the error:
 
 
 02-Jan 06:32 bacula-dir JobId 31005: BeforeJob: pg_dump: SQL command failed
 02-Jan 06:32 bacula-dir JobId 31005: BeforeJob: pg_dump: Error message
 from server: ERROR:  invalid memory alloc request size 4294967293
 02-Jan 06:32 bacula-dir JobId 31005:%2








Re: [ADMIN] pg_dump/restore problems

2010-10-26 Thread Bob Lunney
Glen,
Did you drop the indexes prior to the restore?  If not, try doing so and 
recreating the indexes afterwards.  That will also speed up the data load.
Bob Lunney
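With a custom-format dump, one way to act on this advice is pg_restore's table-of-contents filtering (-l/-L): restore the data first and recreate the indexes afterwards. The dump file and database names below are illustrative:

```shell
# List the dump's table of contents, drop the index entries, then
# restore from the edited list; recreate the indexes afterwards.
pg_restore -l dump.fc > full.toc
grep -v ' INDEX ' full.toc > data_only.toc
pg_restore -L data_only.toc -d mydb dump.fc
```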

--- On Mon, 2/15/10, Glen Brown gkbrow...@gmail.com wrote:

From: Glen Brown gkbrow...@gmail.com
Subject: [ADMIN] pg_dump/restore problems
To: pgsql-admin@postgresql.org
Date: Monday, February 15, 2010, 1:25 PM

I am not sure where I should post this but I am running into problems trying to 
restore a large table. I am running 8.4.1 on all servers. The table is about 
25 GB in size and most of that is toasted. It has about 2.5M records. When I 
dump this table using pg_dump -Fc it creates a 15 GB file. I am trying to 
restore it into a database that has 100 GB of free disk space and it consumes 
it all and fails to finish the restore. The table is not partitioned and has a 
few indexes on it. What can I do?

 
thanks
-glen

[ADMIN] GSS for both Windows and Linux

2010-10-01 Thread Bob Lunney
Is it possible to have clients authenticate via Kerberos to a PG 8.4 server 
running on linux from both linux and windows hosts?  I have authentication 
working (using GSSAPI) between the linux clients and the server.  I'd like to 
use the Windows binary install (which already has GSSAPI compiled in) against 
the linux-based database server.  

o  Do I need to have separate AD principals for the Windows clients or is it 
possible for Windows clients to get a ticket granting ticket from the 
linux-based KDC?

o  How do I combine the linux and Windows keytab files the server needs to 
authenticate if separate principals are required?

o  Since Windows (mis)uses POSTGRES and linux uses postgres for the service 
name, do I need to force either one to use the other's service name, since the 
database server can only use one version? 

Any pointers will be appreciated.  I've scoured Google but can't find the 
missing piece of information I need to get Windows clients working.  Thanks in 
advance!

Bob Lunney


  



Re: [ADMIN] out of memory error

2010-08-05 Thread Bob Lunney
Silvio,

I had a similar problem when starting the database from an account that didn't 
have the appropriate ulimits set.  Check the ulimit values using ulimit -a.

HTH,

Bob Lunney

--- On Thu, 8/5/10, Silvio Brandani silvio.brand...@tech.sdb.it wrote:

 From: Silvio Brandani silvio.brand...@tech.sdb.it
 Subject: [ADMIN] out of memory error
 To: pgsql-admin@postgresql.org
 Date: Thursday, August 5, 2010, 9:01 AM
 Hi,
 
 a query on our production database give following errror:
 
 
 2010-08-05 10:52:40 CEST [12106]: [278-1] ERROR:  out
 of memory
 2010-08-05 10:52:40 CEST [12106]: [279-1] DETAIL: 
 Failed on request of size 48.
 
 
 
 
 any suggestion ?
 
 -- Silvio Brandani
 Infrastructure Administrator
 SDB Information Technology
 Phone: +39.055.3811222
 Fax:   +39.055.5201119
 
 ---
 
 
 
 
 
 
 
 






Re: [ADMIN] password administration

2010-08-05 Thread Bob Lunney
Mark,

Look into kerberos.  You will have to recompile your server to use it.
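
For the rolvaliduntil approach you mention, a rough sketch (the role name is 
made up for illustration):

-- Expire a role's password at a fixed date:
alter role mark_s valid until '2010-09-05';

-- Roles whose passwords expire within the next 30 days,
-- suitable for driving a notification email from a cron job:
select rolname, rolvaliduntil
  from pg_roles
 where rolvaliduntil is not null
   and rolvaliduntil < now() + interval '30 days';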

Bob Lunney

--- On Thu, 8/5/10, Mark Steben mste...@autorevenue.com wrote:

 From: Mark Steben mste...@autorevenue.com
 Subject: [ADMIN] password administration
 To: pgsql-admin@postgresql.org
 Date: Thursday, August 5, 2010, 3:58 PM
 
 
 Hi postgres gurus:
 
 I would like to set up a facility that enforces password
 changes for roles
 After a predefined period (30 days for instance) when
 logging into psql
 Or, at the very least, send an email out to notify that
 your current 
 Password period is about to expire.  Preferably, I'd
 like to use
 The 'rolvaliduntil' column in pg_roles.
 
 I'm wondering if there is an app inside or outside of
 postgres
 that I can use or do I have to design from scratch.
 
 Thanks for your time,
 
 
 Mark Steben | Database Administrator 
 @utoRevenue® - Keeping Customers Close 
 95D Ashley Ave, West Springfield, MA 01089 
 413.243.4800 x1512 (Phone) |413.732-1824 (Fax) 
 @utoRevenue is a registered trademark and a division of
 Dominion
 Enterprises 
  
 
 
 
 
 






[ADMIN] Slony DDL/DML Change and attempted to lock invisible tuple PG Error

2010-06-25 Thread Bob Lunney
I'm not sure if this is really a bug, so I'm posting here instead of pgsql-bugs.

Last night I attempted to make a DDL and DML change through Slony, using the 
execute script command.  The changes (adding a column and updating some rows) 
worked on the master but failed on the slave, with the PG database on the slave 
throwing an attempted to lock invisible tuple error.  Neither the DDL nor the 
DML change stuck on the slave, though both did on the master.

Slony, of course, restarted its worker thread and tried again (and again and 
again...).  I stopped the retries by updating the sl_event entry on the master 
node, changing it to a SYNC record, dropping the table in question from the 
replication set, applying the DDL/DML manually on the slave, then re-adding the 
table to the replication set.  (All of which worked fine, BTW.)

So, the question is: Is this a Slony or a PostgreSQL problem, and what 
should/can I do about it?

I'm running SLES 10 (Linux slave1 2.6.16.60-0.33-smp #1 SMP Fri Oct 31 14:24:07 
UTC 2008 x86_64 x86_64 x86_64 GNU/Linux), PostgreSQL 8.4.1 (PostgreSQL 8.4.1 on 
x86_64-unknown-linux-gnu, compiled by GCC gcc (GCC) 4.1.0 (SUSE Linux), 
64-bit), and Slony 2.0.3-rc2.  This setup has been running fine for 5 months 
under very heavy daily load.  (BTW, Slony 2.0.3-rc2 has been working great 
replicating data and servicing DDL requests just fine, with no problems up to 
now, but I'm still going to test 2.0.4 and upgrade if the test results pan out).


The Slony log contains:

2010-06-24 18:06:09 EDT CONFIG remoteWorkerThread_1: DDL request with 7 
statements
2010-06-24 18:06:09 EDT CONFIG remoteWorkerThread_1: DDL Statement 0: [

-- -*- SQL -*-

set session authorization main_usr;]
2010-06-24 18:06:09 EDT CONFIG DDL success - PGRES_COMMAND_OK
2010-06-24 18:06:09 EDT CONFIG remoteWorkerThread_1: DDL Statement 1: [
set search_path to public;]
2010-06-24 18:06:09 EDT CONFIG DDL success - PGRES_COMMAND_OK
2010-06-24 18:06:09 EDT CONFIG remoteWorkerThread_1: DDL Statement 2: [


alter table public.rte add column dols boolean default false not null;]
2010-06-24 18:06:09 EDT CONFIG DDL success - PGRES_COMMAND_OK
2010-06-24 18:06:09 EDT CONFIG remoteWorkerThread_1: DDL Statement 3: [
update public.rte set dols = true where mctr in ('AA', 'YY');]
2010-06-24 18:06:09 EDT ERROR  DDL Statement failed - PGRES_FATAL_ERROR
2010-06-24 18:06:09 EDT INFO   slon: retry requested
2010-06-24 18:06:09 EDT INFO   slon: notify worker process to shutdown


The relevant PG log entries are:

10.192.2.1(41547):2010-06-24 18:06:09.913 EDT:LOG:  statement:


alter table public.rte add column dols boolean default false not null;
10.192.2.1(41547):2010-06-24 18:06:09.935 EDT:ERROR:  attempted to lock 
invisible tuple
10.192.2.1(41547):2010-06-24 18:06:09.935 EDT:STATEMENT:
update public.rte set dols = true where mctr in ('AA', 'YY');
10.192.2.1(41547):2010-06-24 18:06:09.958 EDT:LOG:  unexpected EOF on client 
connection
10.192.2.1(41547):2010-06-24 18:06:09.958 EDT:LOG:  disconnection: session 
time: 0:00:00.071 user=rep_usr database=main_db host=10.192.2.1 port=41547

Thanks in advance for your help!

Regards,

Bob Lunney



  



Re: [ADMIN] alter column resize triggers question

2010-06-21 Thread Bob Lunney
Mike,
Doesn't look like it, at least on 8.4.  Give the script below a try for 
yourself.
Another approach would be to create a new table with the schema you need, 
insert the rows from your existing table into it, rename the two tables 
appropriately, then recreate the indexes and trigger on the new table.  That 
way you won't have to worry about the trigger firing at all.
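
A rough sketch of that second approach, using a toy table (all names are 
illustrative):

begin;
create table blah_new (blah bigint, ts timestamptz);   -- new definition, no trigger attached
insert into blah_new select blah, ts from blah;        -- copies the rows; no trigger fires
alter table blah rename to blah_old;
alter table blah_new rename to blah;
-- recreate any indexes and the timestamp trigger on the new table here
commit;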
Bob Lunney
==
create table blah (blah int, ts timestamptz);

create function update_timestamp() returns trigger as $$
begin
  new.ts = now();
  return new;
end;
$$ language plpgsql;

create trigger blah_t before insert or update on blah
  for each row execute procedure update_timestamp();

insert into blah values (1);
insert into blah values (2);
insert into blah values (3);
select * from blah;

 blah |              ts
------+-------------------------------
    1 | 2010-06-21 14:33:32.14576-04
    2 | 2010-06-21 14:33:34.545739-04
    3 | 2010-06-21 14:33:36.097878-04
(3 rows)

alter table blah alter column blah type bigint;
select * from blah;

 blah |              ts
------+-------------------------------
    1 | 2010-06-21 14:33:32.14576-04
    2 | 2010-06-21 14:33:34.545739-04
    3 | 2010-06-21 14:33:36.097878-04
(3 rows)
===
--- On Mon, 6/21/10, Mike Broers mbro...@gmail.com wrote:
From: Mike Broers mbro...@gmail.com
Subject: [ADMIN] alter column resize triggers question
To: pgsql-admin@postgresql.org
Date: Monday, June 21, 2010, 2:18 PM

Pg v8.3.8

I have a table whose column size needs to be increased:

 \d dim_product
                        Table report.dim_product
       Column       |           Type           |                        Modifiers
--------------------+--------------------------+----------------------------------------------------------
 product_id         | integer                  | not null default nextval('dim_product_id_seq'::regclass)
 application_id     | integer                  | not null
 source_product_cd  | integer                  | not null
 product_type       | character varying(20)    | not null
 product_name       | character varying(100)   | not null
 vendor_offer_cd    | character varying(30)    | 
 service_name       | character varying(20)    | 
 category           | character varying(40)    | 
 svc_line_cd        | character varying(40)    | 
 established        | timestamp with time zone | not null
 modified           | timestamp with time zone | not null
Indexes:
    dim_product_pkey PRIMARY KEY, btree (product_id)
    idx_dim_product_modified btree (modified)
    idx_dim_product_source_product_cd btree (source_product_cd)
Triggers:
    t_dim_product_timestamp_b_iu BEFORE INSERT OR UPDATE ON dim_product FOR 
EACH ROW EXECUTE PROCEDURE public.update_timestamps()


I need to change service_name column to varchar(55), my plan was to backup the 
table with pg_dump, then run the below alter statement:


alter table dim_product alter column service_name type varchar(55);

But I am worried about the triggers because I believe that the alter table 
statement will rewrite the table and I don't want those triggers firing.  Does 
anyone know if I need to disable these triggers prior to the alter table 
statement, or if there are any other dependencies or precautions I should 
review before attempting this action?  I have also seen there is a workaround 
with running updates to the pg_attribute table but frankly that makes me a 
little nervous.


Thanks in advance,
Mike







  

Re: [ADMIN] Runaway Locks

2010-05-04 Thread Bob Lunney
If you're using Tomcat or some other Java container that does connection 
management restart it and the lock should go away.
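
If restarting the client doesn't clear them: a virtualtransaction of the form 
-1/xxx (as on the hibernate_sequences rows) usually indicates a prepared 
two-phase transaction, which survives both client and server restarts. A 
sketch for finding and rolling one back (the GID shown is a placeholder):

-- List any outstanding prepared transactions:
select gid, prepared, owner, database from pg_prepared_xacts;

-- Roll one back by its GID:
rollback prepared '131075_example_gid';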

Bob Lunney

--- On Fri, 4/30/10, Kamcheung Sham cs...@computer.org wrote:

 From: Kamcheung Sham cs...@computer.org
 Subject: [ADMIN] Runaway Locks
 To: pgsql-admin@postgresql.org
 Date: Friday, April 30, 2010, 9:03 AM
 
 I was connecting to Postgres 8.3 through JDBC. During my
 unit tests, something when wrong and now leaving with the
 following locks in the db:
 
 arc_dev=# select locktype, mode, relname, virtualtransaction, pid
           from pg_locks l join pg_class c on l.relation = c.oid;
  locktype |       mode       |          relname           | virtualtransaction | pid
 ----------+------------------+----------------------------+--------------------+------
  relation | AccessShareLock  | pg_locks                   | 1/38               | 1816
  relation | RowShareLock     | hibernate_sequences        | -1/2091555         |
  relation | RowExclusiveLock | hibernate_sequences        | -1/2091555         |
  relation | AccessShareLock  | pg_class_oid_index         | 1/38               | 1816
  relation | AccessShareLock  | pg_class_relname_nsp_index | 1/38               | 1816
  relation | AccessShareLock  | pg_class                   | 1/38               | 1816
 (6 rows)
 arc_dev=# 
 
 The locks on 'hibernate_sequences' is causing any update to
 the locked row to hang. There is currently no running
 database client process anymore (as I've restarted by server
 a few times).
 
 My question is how do i kill the virtual transaction and
 have the locks released?
 
 Thanks,
 kam
 






Re: [ADMIN] Handling of images via Postgressql

2010-04-07 Thread Bob Lunney
Suresh,
The real question is:  does manipulation of the images have to be 
transactional?  If so, store them in the database.  If not, store the images in 
the file system and put the file name in the database, thereby avoiding 
unnecessary WAL overhead.
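
A minimal sketch of the file-name-in-the-database approach (all names and the 
sample path are invented for illustration):

create table image_meta (
    image_id   serial primary key,
    file_path  text not null,          -- where the file lives on the file server
    mime_type  text,
    byte_size  bigint,
    added_at   timestamptz not null default now()
);

-- The application writes the file to disk, then records it:
insert into image_meta (file_path, mime_type, byte_size)
values ('/images/2010/04/sample.png', 'image/png', 2097152);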
Bob Lunney

--- On Wed, 4/7/10, Suresh Borse s.bo...@direction.biz wrote:

From: Suresh Borse s.bo...@direction.biz
Subject: [ADMIN] Handling of images via Postgressql
To: pgsql-admin@postgresql.org
Date: Wednesday, April 7, 2010, 8:50 AM




  
  

 




How does PostgreSQL perform in case we have to store and use a huge number of 
images?

Eg 4000 – 1 images, each approx 2 MB in size.

The questions that we have are:

How do we handle such a huge number of images so that the application does not 
slow down?

How does PostgreSQL use caching? In the case of images, does it cache?

Do you suggest having the images in a different database by itself?

Regards,

Suresh
 




  

Re: [ADMIN] Querying the same column and table across schemas

2010-03-05 Thread Bob Lunney


--- On Fri, 3/5/10, John A. Sullivan III jsulli...@opensourcedevel.com wrote:

 From: John A. Sullivan III jsulli...@opensourcedevel.com
 Subject: [ADMIN] Querying the same column and table across schemas
 To: pgsql-admin@postgresql.org
 Date: Friday, March 5, 2010, 2:44 PM
 Hello, all.  I'm working on a
 project using the X2Go terminal server
 project (www.x2go.org).  They record session data in a
 postgresql
 database.  Our environment is a little more secure
 than typical and we
 do not want it possible for one user to see another's
 session data.  We
 thus have divided the session database into schemas each
 with an
 identical set of tables.  Each user only writes and
 reads from their
 schema.
 
 However, we need to query all schemas as if they were
 one.  Is there a
 way to do that?
 
 In other words, if we were a single schema database, we
 could do
 
 select session_id from sessions;
 
 to list all sessions.  How can we accomplish the same
 thing to list all
 the sessions across all the schemas in a single query?
 
 I'm trying to avoid making a thousand call like
 
 select user1.session_id from user1.sessions;
 
 when I could do it in a single query especially since the
 database is
 remote and secured with SSL.
 
 Thanks - John
 

John,

How about creating a central admin schema and putting a trigger on all the 
sessions tables to write changes to the central admin schema's session table?  
The function could belong to the admin role and run with definer's security.
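
A sketch of what that might look like (the schema, table, and column names 
here are invented for illustration):

-- Central table owned by the admin role:
create table admin.all_sessions (schema_name text, session_id text, created timestamptz);

-- Trigger function created by admin, running with definer's rights:
create or replace function admin.log_session() returns trigger as $$
begin
  insert into admin.all_sessions values (tg_table_schema, new.session_id, now());
  return new;
end;
$$ language plpgsql security definer;

-- Attach it to each per-user sessions table:
create trigger t_log_session after insert on user1.sessions
  for each row execute procedure admin.log_session();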

Bob






Re: [ADMIN] AIX - Out of Memory

2010-02-15 Thread Bob Lunney


--- On Mon, 2/15/10, Kenneth Marshall k...@rice.edu wrote:

 From: Kenneth Marshall k...@rice.edu
 Subject: Re: [ADMIN] AIX - Out of Memory
 To: Tom Lane t...@sss.pgh.pa.us
 Cc: Thorne, Francis thor...@cromwell.co.uk, pgsql-admin@postgresql.org
 Date: Monday, February 15, 2010, 11:18 AM
 On Mon, Feb 15, 2010 at 10:57:06AM
 -0500, Tom Lane wrote:
  Thorne, Francis thor...@cromwell.co.uk
 writes:
   Looking for some help with regards to an 'Out of
 Memory' issue I have
   with our Postgresql install on AIX.  When
 running large updates or
   select queries we get an out of memory error
 returned and details
   entered in the log file like below.  This is
 a 64-bit install and I have
   set the ulimit for the postgres user to
 unlimited.  
  
  The bloat seems to be here:
  
       AfterTriggerEvents: 131063808 total in 26 blocks; 576 free (7 chunks); 131063232 used
  
  but it's hard to believe you'd be getting out of
 memory after only
  130MB in a 64-bit build.  Are you *sure* the
 postgres executable is
  64-bit?  Are you *sure* the postmaster has been
 launched with
  nonrestrictive ulimit?  On lots of setups that
 takes modifying the
  PG startup script, not just fooling with some user's
 .profile.
  
   This is a 64-bit install (8.3) on AIX 5.3
  
  8.3.what?
  
         
     regards, tom lane
 
 I no longer have an AIX box, but I had similar problems
 with other
 applications that needed large amounts of memory. Some OS
 specific
 steps needed to be taken to allow normal users to allocate
 large
 blocks of memory. The information needed was in their
 on-line docs
 as I recall, but I do not remember the details. The
 executables may
 need to be built with specific options/flags to work.
 
 Regards,
 Ken
 

Ken,

I recently saw a similar issue.  It is two-fold:

1.  I used su - to become the postgres user, and inherited the previous 
account's memory limits,
2.  AfterTriggerEvents queues are caused by foreign key constraints, one per 
row.  If you're loading data, dropping or disabling that constraint makes a 
world of difference.  Just be sure to check afterwards if the RI has been 
violated prior to recreating the FK constraint.
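
A sketch of that sequence (the table and constraint names are made up):

-- Drop the FK before the bulk load:
alter table child drop constraint child_parent_fk;

-- ... load the data ...

-- Check for RI violations before re-adding the constraint;
-- this should return zero rows:
select c.parent_id
  from child c
  left join parent p on p.id = c.parent_id
 where c.parent_id is not null
   and p.id is null;

alter table child add constraint child_parent_fk
  foreign key (parent_id) references parent (id);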

Bob






[ADMIN] Large Number of Files in pg_xlog

2009-11-30 Thread Bob Lunney
Is it possible to reduce the number of files in pg_xlog?  Currently, 
checkpoint_segments is set to 512, which, if I understand correctly, means there 
could be up to 1025 files in pg_xlog, 16 GB of WAL.  Right now there are 833 
files.  I've tried setting checkpoint_segments to a lower number and restarting 
the database, but to no avail.  Is there another method that doesn't involve 
running a dump, initdb, and restore?

Thanks in advance,

Bob Lunney


  



Re: [ADMIN] PostgreSQL Database freezes during backup then generates drwtsn.exe process

2008-12-02 Thread Bob Lunney
Ward,

I've experienced the exact problem you describe.  The two machines were 
identical in every way: make, model, disk layout, OS, etc., and this scenario 
happened regardless of which machine was the primary and which was the 
warm-standby.  Note I was not running pgAgent.

I was using pg_standby to implement copying of WAL files between machines.  It 
would copy the WAL file to a network shared directory, where the warm-standby 
would pick up the file and use it, until the fatal error you describe happened.

I had discovered that during a copy operation Windows will allocate the entire 
file size on the target prior to completing the file copy.  This differs from 
Unix, and may have something to do with the errors we are seeing.  I'm 
speculating here, but I believe when the recovery code "sees" a 16 MB file it 
thinks the entire file contents are available, which is not necessarily the 
case with Windows.

I know some folks recommend rsync, but that requires installing cygwin and my 
client isn't happy with that idea.  Possibly copying the WAL file to a 
temporary location, then moving it to the target location, may mitigate the 
problem, since move operations (on the same disk drive, anyway) in Windows 
simply rejigger the file descriptor and don't reallocate any disk space.  I 
haven't tried it yet, but I'm moving in that direction.

Regards,

Bob Lunney

--- On Tue, 12/2/08, Ward Eaton [EMAIL PROTECTED] wrote:

From: Ward Eaton [EMAIL PROTECTED]
Subject: [ADMIN] PostgreSQL Database freezes during backup then generates drwtsn.exe process
To: pgsql-admin@postgresql.org
Date: Tuesday, December 2, 2008, 9:01 AM


 
 

 
 





I’m running PostgreSQL 8.3 on a Windows
 2003 machine (pgAgent service is also installed and
running). The system is running as a redundant database server. Thus, backing
up from one machine to another, and transferring over WAL logs. In the postageSQL
log files there are several error messages that read: 

‘could not rename file, no such file or directory’
 

  

Eventually, postgreSQL ‘freezes’, I’m
unable to access the database, and there is no logging. I find a drwtsn.exe
process running under the postgres account in Task Manager. If I kill the
drwtsn PostgreSQL service will stop. I can then restart it and the database
will run as expected for a few days.  

  

Has anyone encountered this problem before? If so, were you
able to find a solution? 

  

Thanks for any help in advance. 

  


Ward Eaton
Project
Engineer Automation

RAD-CON Inc.
TECHNOLOGY: Innovative 
Proven
Office:

 +1.216.706.8927
Fax:

 +1.216.221.1135
Website: www.RAD-CON.com
E-mail: [EMAIL PROTECTED] 

  



 





  

[ADMIN] PITR warm-standby startup fails

2008-08-24 Thread Bob Lunney
I'm trying to vet the PITR/warm-standby process, so I set up a primary and a 
secondary server on two different Windows machines.  (Yeah, I know, but I don't 
have a choice in the matter.)

The primary works fine and copies its WAL files to the archive directory.  As 
long as the secondary is in recovery mode life is good.  When I introduce the 
trigger file, however, to kick the secondary out of recovery mode it bombs, 
complaining that it's looking for the next WAL file (which is never going to 
arrive, especially not in the local pg_xlog directory), then gives me an 
invalid parameter error.  The relevant part of the secondary's log file is 
here:

2008-08-24 23:02:56 CDT LOG:  restored log file 000100040088 from 
archive
2008-08-24 23:03:01 CDT LOG:  restored log file 000100040089 from 
archive
2008-08-24 23:03:06 CDT LOG:  restored log file 00010004008A from 
archive
2008-08-24 23:07:02 CDT LOG:  could not open file 
pg_xlog/00010004008B (log file 4, segment 139): No such file or 
directory
2008-08-24 23:07:05 CDT LOG:  startup process (PID 1468) was terminated by 
exception 0xC00D
2008-08-24 23:07:05 CDT HINT:  See C include file ntstatus.h for a 
description of the hexadecimal value.
2008-08-24 23:07:05 CDT LOG:  aborting startup due to startup process failure

My recovery command is:

restore_command = '..\bin\pg_standby -d -s 5 -t .\pgsql.trigger.5442 
..\data\archive %f %p %r 2>pg_log\standby.log'


I found this on the forums.  Could it be the source of the problem I'm 
experiencing?

Thanks in advance for the wizardly advice!

Bob Lunney



  
