Re: [GENERAL] Upgrading postgresql minor version

2017-05-10 Thread Alan Hodgson
On Wednesday 10 May 2017 17:13:50 Ron Ben wrote:
> Not possible 
> https://www.postgresql.org/download/linux/debian/
>  
> To upgrade I do: apt-get install postgresql-9.3
> There is no way to "roll back" from here.
> I cannot choose which version to install; it installs the latest version
> packaged for Debian. Currently it's 9.3.16.
> The documentation says that upgrading a minor version is just "replacing the
> executables", so there has to be a way to save the old ones on the side for
> rollback. If not, the documentation is insufficient. We all know that
> sometimes even the smallest change can cause trouble.
> 

Download the old packages from the repository. You can use dpkg to manually 
downgrade them if needed.  You can also use dpkg to get a list of the files in 
any particular package if you want to save them manually.
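
As a sketch of that path on Debian (the pinned version strings are
illustrative; check what your repository actually offers):

    # see which versions the configured repositories offer
    apt-cache policy postgresql-9.3

    # downgrade to a specific release still present in the repo
    sudo apt-get install postgresql-9.3=9.3.15-0+deb8u1

    # or install a previously downloaded .deb directly
    sudo dpkg -i postgresql-9.3_9.3.15-0+deb8u1_amd64.deb

    # list the files a package ships, if you want to save copies manually
    dpkg -L postgresql-9.3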

Or you should ask your system administrator for help.

This is not a PostgreSQL issue, which is why it (and many other things) isn't
in the docs. Every version of Linux uses a different package manager, and the
system administrator needs to know how to use it.




Re: [GENERAL] Caching and Blobs in PG? Was: Can PG replace redis, amqp, s3 in the future?

2017-05-04 Thread Alan Hodgson
On Thursday 04 May 2017 14:47:54 John R Pierce wrote:
> On 5/4/2017 2:28 PM, Alan Hodgson wrote:
> > On Thursday 04 May 2017 14:21:00 John R Pierce wrote:
> >> or EBS, and I've heard from more than a few people that EBS can be
> >> something of a sand trap.
> > 
> > Sorry for following up off-topic, but EBS has actually improved
> > considerably in the last few years. You can get guaranteed (and very
> > high) IOPS on SSD storage, and many instance types come with high-speed
> > throughput to EBS. It's much much better for databases than it was 5
> > years ago.
> 
> has it become more stable when Amazon has their occasional major hiccups?

No ... when they have a major problem it generally cascades across the region, 
and several outages have hit EBS in particular. In us-east-1, at least. Other 
regions seem more reliable.




Re: [GENERAL] Caching and Blobs in PG? Was: Can PG replace redis, amqp, s3 in the future?

2017-05-04 Thread Alan Hodgson
On Thursday 04 May 2017 14:21:00 John R Pierce wrote:
> or EBS, and I've heard from more than a few people that EBS can be
> something of a sand trap.
> 

Sorry for following up off-topic, but EBS has actually improved considerably 
in the last few years. You can get guaranteed (and very high) IOPS on SSD 
storage, and many instance types come with high-speed throughput to EBS. It's 
much much better for databases than it was 5 years ago.




Re: [GENERAL] Recover PostgreSQL database folder data

2017-04-21 Thread Alan Hodgson
> On Fri, Apr 21, 2017 at 12:40 PM, Edson Lidorio wrote:
> > Hi,
> > There was a disaster in my development notebook. I was able to recover the
> > data folder. PostgreSQL 9.6.2 was installed on CentOS 7.
> > 
> > Here are the procedures I'm trying in order to initialize PostgreSQL so I
> > can take a backup.
> > 
> > 1- I installed PostgreSQL 9.6.2 on a VM with Centos 7.
> > 2- I stopped the PostgreSQL service: sudo systemctl stop postgresql-9.6
> > 3- I renamed the /var/lib/pgsql/9.6/data folder to date data_old and
> > copied the old date folder
> > 
> > 4- I gave permission in the folder date:
> >  sudo chown postgres: postgres /var/lib/pgsql/9.6/data;

Ensure you actually have a folder named "/var/lib/pgsql/9.6/data" with your 
database in it (not, possibly, a folder named "date" per your original 
message).

as root:

chown -R postgres:postgres /var/lib/pgsql/9.6/data
restorecon -R /var/lib/pgsql/9.6/data

If it still won't start after that, please do an "ls -alRZ 
/var/lib/pgsql/9.6", pastebin the output, and send the link.




Re: [GENERAL] "Database does not exist" weirdness

2017-02-17 Thread Alan Hodgson
On Friday 17 February 2017 18:18:20 Michael Tyson wrote:
> postgres=# \q
> pi@raspi ~ $ sudo -u postgres psql testdb
> psql: FATAL:  database "testdb" does not exist
> pi@raspi ~ $ sudo -u postgres createdb testdb
> createdb: database creation failed: ERROR:  duplicate key value violates
> unique constraint "pg_database_datname_index" DETAIL:  Key
> (datname)=(testdb) already exists.
> 

Something's seriously borked in your PostgreSQL data files. The message it 
should give you if the database exists is:

createdb: database creation failed: ERROR:  database "testdb" already exists

It looks like it partially exists in the catalog but not really. I can't guess 
how you got to such a state, but you should probably nuke your data directory 
and start over with a fresh initdb.




Re: [GENERAL] streaming replication and WAL

2016-10-25 Thread Alan Hodgson
On Tuesday 25 October 2016 17:08:26 t.dalpo...@gmail.com wrote:
> Hi,
> let's suppose I have:
>   - a primary server with its own local archive location, configured for
> continuous archiving
>   - a standby server without archive.
> These servers are configured for Sync streaming replication .
> Let's suppose that the standby stays down for a long time, then it
> restarts, goes into catchup mode and now needs some old WALs from the
> server archive location.
> Will the standby be able to automatically drain those files through the
> replication or only the WALs being currently updated by the primary ?
> 

It would need its own direct access to the master's archive.
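
As a minimal sketch (the archive path and transport are assumptions), the
standby's recovery.conf would fetch archived segments itself, alongside
streaming:

    restore_command = 'scp master:/var/lib/pgsql/archive/%f %p'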





Re: [GENERAL] Help on recovering my standby

2016-06-22 Thread Alan Hodgson
On Tuesday 21 June 2016 19:34:18 Ramalingam, Sankarakumar wrote:
> Hi I have my standby (streaming replication) down due to missing wal files.
> You would see the same error in the logs stating "cannot find the wal file
> ..." What is the best way to get it going so that when we switch between
> standby and primary once in a while they are in sync?
> 
> Currently I am working on a CERT server and hence there is no outage
> concerns. I need to repeat the same process on prod once I get it going
> successfully. Any help is appreciated.
> 

You should keep your WAL files from the master for at least as long as the 
slave might be offline (plus startup time), somewhere the slave can copy them 
from when needed (shared file system, object store, scp target, whatever).

See the postgresql.conf parameter archive_command and the corresponding  
recovery.conf parameter restore_command.
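
A minimal sketch of that pairing (the shared archive path is an assumption):

    # postgresql.conf on the master
    archive_mode = on
    archive_command = 'test ! -f /mnt/wal_archive/%f && cp %p /mnt/wal_archive/%f'

    # recovery.conf on the standby
    standby_mode = 'on'
    restore_command = 'cp /mnt/wal_archive/%f %p'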





Re: [GENERAL] Checkpoint Err on Startup of Rsynced System

2016-05-31 Thread Alan Hodgson
On Tuesday, May 31, 2016 10:13:14 AM Jim Longwill wrote:
> I am trying to setup a 2nd, identical, db server (M2) for development
> and I've run into a problem with starting up the 2nd Postgres installation.
> 
> Here's what I've done:
>1) did a 'clone' of 1st (production) machine M1 (so both machines on
> Cent OS 7.2)
>2) setup an rsync operation, did a complete 'rsync' from M1 to M2
>3) did a final 'CHECKPOINT' command on M1 postgres
>4) shutdown postgres on M1 with 'pg_ctl stop'
>5) did final 'rsync' operation  (then restarted postgres on M1 with
> 'pg_ctl start')
>6) tried to startup postgres on M2
> 
> It won't start, & in the log file gives the error message:
> ...
> < 2016-05-31 09:02:52.337 PDT >LOG:  invalid primary checkpoint record
> < 2016-05-31 09:02:52.337 PDT >LOG:  invalid secondary checkpoint record
> < 2016-05-31 09:02:52.337 PDT >PANIC:  could not locate a valid
> checkpoint record
> < 2016-05-31 09:02:53.184 PDT >LOG:  startup process (PID 26680) was
> terminated by signal 6: Aborted
> < 2016-05-31 09:02:53.184 PDT >LOG:  aborting startup due to startup
> process failure
> 
> I've tried several times to do this but always get this result.  So, do
> I need to do a new 'initdb..' operation on machine M2 + restore from M1
> backups?  Or is there another way to fix this?

What you describe should work fine. In order of likelihood of why it doesn't,
I'd guess:

1 - you're not waiting for the database to fully shut down before running the 
last rsync
2 - you're not in fact rsync'ing the entire data directory
3 - the target server is running a different version of PostgreSQL or has a 
different machine architecture
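
For reference, a clone sequence that avoids cause #1 could look like this
(hosts and paths are assumptions):

    # warm pre-sync while M1 is still running
    rsync -a /var/lib/pgsql/data/ m2:/var/lib/pgsql/data/
    # on M1: stop and wait for "server stopped" before continuing
    pg_ctl -D /var/lib/pgsql/data stop -m fast
    # final sync with the database fully shut down
    rsync -a --delete /var/lib/pgsql/data/ m2:/var/lib/pgsql/data/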





Re: [GENERAL] Migrate 2 DB's - v8.3

2016-05-27 Thread Alan Hodgson
On Friday, May 27, 2016 05:32:08 PM Melvin Davidson wrote:
> Well, Slony certainly will do the trick.
> Keep in mind you will need to do schema only first to the slave.
> You set up replication from the old server with the db on the new server as
> the slave. Then you initiate replication. It will probably take a long time
> to
> replicate, but then you have the option to promote the slave at your time
> preference (IE: your 2 hr window). It should only take a few minutes for
> Slony to do the switchover, but the best thing to do is a dry run first.
> IOW, you'll have to do the whole thing twice to get an accurate switch time,
> but you won't need to change your network until you are ready to go live.

Slony doesn't do BLOBs, afaik, unless he's using BYTEA fields.

Otherwise I believe dump/reload is the OP's only choice. He should be able to do
90GB in 2 hours on fast enough hardware; just pipe it over the network so the
restore runs simultaneously with the dump.

Also remove as many indexes as possible beforehand and use CREATE INDEX
CONCURRENTLY manually afterwards to add them back in.
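
A sketch of both steps (hosts, database, and index names are assumptions):

    # restore runs simultaneously with the dump, no intermediate file
    pg_dump -h oldhost mydb | psql -h newhost mydb

    # rebuild dropped indexes afterwards without blocking writes
    psql -h newhost mydb -c 'CREATE INDEX CONCURRENTLY orders_customer_idx ON orders (customer_id);'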




Re: [GENERAL] Any difference between commit/rollback when only temp tables and \copy are used

2016-05-16 Thread Alan Hodgson
On Monday, May 16, 2016 03:41:23 PM David G. Johnston wrote:
> I have a psql script that obtains data via the \copy command and loads it
> into a temporary table.  Additional work is performed possibly generating
> additional temporary tables but never any "real" tables.  Then the script
> outputs, either to stdout or via \copy, the results.
> 
> Does it matter whether I issue a ROLLBACK or a COMMIT at the end of the
> transaction?  More basically: does it matter whether I issue a BEGIN?
> 
> The script runs on Ubuntu inside a bash shell's heredoc.
> 

Some things will complete faster if you use BEGIN to start, as PostgreSQL will 
otherwise issue an implicit BEGIN and COMMIT before and after every statement.

If you don't need anything saved at the end it probably doesn't matter if you 
use ROLLBACK or COMMIT.
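
As an illustration (table and file names are assumptions), the whole script can
run in one explicit transaction and discard its work at the end:

    psql mydb <<'SQL'
    BEGIN;
    CREATE TEMP TABLE staging (id int, val text);
    \copy staging FROM 'input.csv' CSV
    \copy (SELECT val, count(*) FROM staging GROUP BY val) TO 'results.csv' CSV
    ROLLBACK;  -- only temp tables were touched, so COMMIT would behave the same
    SQL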






Re: [GENERAL] Problem after replication switchover

2016-04-06 Thread Alan Hodgson
On Wednesday, April 06, 2016 10:33:16 AM Lars Arvidson wrote:
> > I'd guess it's probably more like option 3 - Glusterfs ate my database.
> 
> Hi, thanks for your reply!
> We do archive logs on a distributed Glusterfs volume in case the streaming
> replication gets too far behind and the transaction logs have been removed.
> Would a restore of a corrupt archived log file give the symptoms we are
> seeing? Would not Postgresql detect that the logfile was corrupt? Are there
> some way I can analyze archived logs files to see if this is the problem?
> 

If it's just storing the logs, I doubt it's the cause of the problem. You can 
ignore my message. I had too much fun fighting with Gluster recently.

I reread your original full post, and the one thing that stuck out for me was 
"the clusters are now replicating from each other". I feel like that could be 
a problem. But someone more intimate with the replication might want to weigh
in on that.

Other than that, I wonder if you just have a hardware problem with your 
storage.




Re: [GENERAL] Problem after replication switchover

2016-04-05 Thread Alan Hodgson
On Tuesday, April 05, 2016 12:55:04 PM Lars Arvidson wrote:
> Is there something I missed in the switchover or could this be a bug?
> 

I'd guess it's probably more like option 3 - Glusterfs ate my database.





Re: [GENERAL] Does PostgreSQL support to write glusterfs by Libgfapi

2015-12-16 Thread Alan Hodgson
On Tuesday, December 15, 2015 11:26:40 PM zh1029 wrote:
> Hi,
>   PostgreSQL (9.3.6) seems to perform poorly while writing data to a GlusterFS
> distributed file system. libgfapi has been provided since GlusterFS 3.4 to
> avoid kernel round-trips/data copies, which can improve performance. But I
> didn't find any instructions on the PostgreSQL web pages. Do you know
> whether a later PostgreSQL release supports using libgfapi to optimize
> writing data to a GlusterFS file system?
> 

Putting a database on Gluster is a horrible idea in any case. But no, 
PostgreSQL does not have Gluster-specific support.

I'm actually trying to think of a use case where it would make sense; I 
suppose a large data warehouse could theoretically see some sequential read 
improvements from a scale-out cluster file system. But you could only have one 
client node accessing it.



Re: [GENERAL] HELP!!! The WAL Archive is taking up all space

2015-12-09 Thread Alan Hodgson
On Wednesday, December 09, 2015 07:55:09 AM FattahRozzaq wrote:
> archive_mode = on
> archive_command = 'cp -i %p /home/postgres/archive/master/%f'
> 
> 
> The WAL archive folder is at /home/postgres/archive/master/, right?
> This directory consumes around 750GB of Disk-1.
> Each segment in the /home/postgres/archive/master/ is 16MB each
> There are currently 47443 files in this folder.
> 
> If I want to limit the total size use by WAL archive to around 200-400
> GB, what value should I set for the wal_keep_segments,
> checkpoint_segments?

PostgreSQL doesn't clean up files copied by your archive_command. You need to 
have a separate task clean those out. PostgreSQL's active wal_keep_segments 
etc. are in the data/pg_xlog directory.
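
Two illustrative cleanup approaches (the retention period and the .backup file
name are assumptions):

    # time-based, e.g. from cron: drop archived segments older than 14 days
    find /home/postgres/archive/master -type f -mtime +14 -delete

    # or anchor on the newest base backup: pg_archivecleanup removes
    # everything older than the named .backup file
    pg_archivecleanup /home/postgres/archive/master 000000010000004B000000AF.00000028.backup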




Re: [GENERAL] Most effective settings for deleting lots of data?

2015-08-26 Thread Alan Hodgson
On Wednesday, August 26, 2015 08:25:02 PM Cory Tucker wrote:
 What settings would you recommend?  Also, it just occurred to me that I
 should try to disable/drop all indexes (especially since they will be
 recreated) later so that those are not updated in the process.

Don't drop the indexes your foreign keys use to find cascading deletes.
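
An illustrative case (table and column names are assumptions): deleting parent
rows triggers one lookup per cascaded row on the referencing column, which
becomes a sequential scan each time if this index is gone:

    psql mydb -c 'CREATE INDEX child_parent_id_idx ON child (parent_id);'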





Re: [GENERAL] Storing Video's or vedio file in DB.

2014-12-17 Thread Alan Hodgson
On Wednesday, December 17, 2014 01:55:46 PM Thomas Kellerer wrote:
 Albe Laurenz wrote on 17.12.2014 at 11:07:
  and the performance will be worse than reading files from the file system.
 
 There is a Microsoft research [1] (from 2006) which tested this myth using
 SQL Server. It showed that the database might actually be faster than the
 file system.
 
 As this topic comes up at my workplace every now and then as well, I created
 a little web application (Java/JDBC) to test this on Postgres and possibly
 other DBMS.
 
 Turns out Postgres likewise isn't really slower at this than the file
 system.
 
 For small files around 50k both perform similarly: the average time to read
 the blob from a bytea column was around 2ms whereas the average time to
 read the blob from the filesystem was around 1ms. The test uses 50 threads
 to read the blobs using the PK of the table.
 
 Reading from the filesystem means looking up the path for the file in the
 database table and then reading the file from the filesystem.
 
 For larger files around 250k Postgres was actually faster in my tests: 130ms
 reading the bytea column vs. 260ms reading the file from disk.
 
 The tests were done locally on my Windows laptop.
 I didn't have time yet to do this on a Linux server. I expect the filesystem
 to have some impact on the figures and NTFS is not known for being
 blazingly fast. So maybe those figures will change.
 
 
 My tests however do not take into account the actual time it takes to send
 the binary data from the server to the client (=browser). It might well be
 possible that serving the file through an Apache Web Server directly is
 faster than serving the file through a JEE Servlet. My intention was to
 measure the raw read speed of the binary data from the medium where it is
 stored.

You can get the data from disk about as fast, but actually serving it results 
in a large CPU hit that isn't present when serving files.

And if you're using bytea, your app server has to allocate memory to hold at 
least one full copy of the file (I seem to recall that it works out to 2 
copies, actually, but it's been a while since I tried it). Most languages 
aren't good about releasing that memory, so that hit stays around until the 
process gets recycled.

For a low volume app, both might be acceptable - any modern CPU can swamp most 
outbound bandwidth even while decoding bytea. But it is a large amount of 
overhead compared to a web server just dumping files into a network buffer 
straight from disk cache.

Also, maintaining large tables still sucks. You can partition them to make 
things friendlier. pg_upgrade makes things nicer, but it can't always be used, 
so major version upgrades can still be a problem. 

On the plus side, all your data is in one place, which makes it cluster-
friendly and easy to delete files when needed, and makes taking consistent 
backups much simpler.




Re: [GENERAL] Processor usage/tuning question

2014-10-03 Thread Alan Hodgson
On Friday, October 03, 2014 11:24:31 AM Israel Brewster wrote:
 I have a Postgresql 9.3.5 server running on CentOS 6.5. In looking at some
 stats today, I saw that it was handling about 4-5 transactions/second
 (according to the SELECT sum(xact_commit+xact_rollback) FROM
 pg_stat_database; query), and an instance of the postmaster process was
 consistently showing 40%-80% utilization to handle this. I didn't think
 anything of that (the machine has plenty of capacity) until I mentioned it
 to a friend of mine, who said that utilization level seemed high for that
 many transactions. So if that level of utilization IS high, what might I
 need to tune to bring it down to a more reasonable level?
 

You probably have some read queries not properly indexed that are sequentially 
scanning that 1.2 million row table over and over again. Enable slow query 
logging and see what's going on.
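
For example, in postgresql.conf (the threshold is illustrative):

    log_min_duration_statement = 250   # in ms; logs every statement slower than this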





Re: [GENERAL] Where art thou, plpython2.dll? (EDB installer)

2014-09-23 Thread Alan Hodgson
On Tuesday, September 23, 2014 02:05:48 PM Nick Guenther wrote:
 I uninstalled all the postgres subpackages and rebuilt them from
 ports, and ended up with an identical plpython2.so, which has these
 checksums:
 SHA256 (/usr/local/lib/postgresql/plpython2.so) =
 8c7ff6358d9bf0db342e3aca1762cd7c509075a6803b240291d60a21ca38198b
 MD5 (/usr/local/lib/postgresql/plpython2.so) =
 bb6122f27f48f0b3672dbc79cef40eea
 SHA1 (/usr/local/lib/postgresql/plpython2.so) =
 4dd79641cbad3f71466648559d74e6b0c4f174a3
 
 Any other OpenBSD users that have run into this, here?

I'm not a current BSD user, but I'd suggest running ldd on that library to see
if it has any shared libraries it can't find, if it references shared libraries
that are not in whatever OpenBSD uses for a library search path (i.e. the
equivalent of Linux's ld.so.conf), or if any of those shared libraries have
permissions issues.
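
Something like (library path from the post above):

    ldd /usr/local/lib/postgresql/plpython2.so | grep 'not found'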





Re: [GENERAL] large table

2014-09-22 Thread Alan Hodgson
On Monday, September 22, 2014 11:17:05 AM Luke Coldiron wrote:
 The actual size of the table is around 33 MB.
 The myFunc function is called every 2.5 seconds and the wasUpdated function
 every 2 seconds by separate processes. I realize that running a FULL VACUUM
 or CLUSTER command on the table will resolve the issue but I am more
 interested in a root cause that explains why this table would end up in
 this state. I have tried to reproduce this issue by running the exact setup
 and have not been able to get the table to grow like this example. Any
 plausible causes or explanations would be much appreciated. Luke

I'd guess that some other process held a transaction open for a couple of
weeks, and that prevented any vacuuming from taking place.
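
A quick way to spot such a culprit while it is happening (column names are for
9.2+; older releases use procpid and current_query):

    psql <<'SQL'
    SELECT pid, state, now() - xact_start AS xact_age
    FROM pg_stat_activity
    WHERE xact_start IS NOT NULL
    ORDER BY xact_start
    LIMIT 5;
    SQL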




Re: [GENERAL] Help related to Postgresql for RHEL 6.5

2014-08-28 Thread Alan Hodgson
On Friday, August 29, 2014 04:14:35 AM Yogesh. Sharma wrote:
 Dear David,
 
  Are you currently using PostgreSQL?
 
 Currently we are using PostgreSQL 8.1.18 on RHEL 5.8.
 Now we plan to update this to PostgreSQL 9.0 with RHEL 6.5, since version 9.0
 is where I found the fewest compatibility issues.
 

Any of the currently maintained PostgreSQL versions will run fine on RHEL 6.5 - 
that would be the latest release of any version from 9.0 up. 

Only you can test and find out if your application(s) will need changes to work 
with those versions.





Re: [GENERAL] Upgrade to 9.3

2014-07-21 Thread Alan Hodgson
On Monday, July 21, 2014 09:05:18 PM Karthik Iyer wrote:
 Hello,
 
  We are planning to upgrade Postgres from 9.0 to 9.3. Was wondering
 if there are any serious changes that I have to look out for
 (syntax/datatypes changes) so that my code does not break.
 

http://www.postgresql.org/docs/9.1/static/release-9-1.html
http://www.postgresql.org/docs/9.3/static/release-9-2.html
http://www.postgresql.org/docs/9.3/static/release-9-3.html





Re: [GENERAL] How to store fixed size images?

2014-06-19 Thread Alan Hodgson
On Thursday, June 19, 2014 10:21:56 AM Andy Colson wrote:
 On 06/16/2014 08:10 PM, sunpeng wrote:
  We have many small size(most fixed size) images, how to store them? There
  are two options: 1. Store images in folders, managed by os file system,
  only store path in postgresql 2. Store image as bytea in postgresql
  How do you usually store images?
  Thanks!
  
  peng
 
 I think it depends on how you are going to use them.  I, for example, have
 lots of images that are served on a web page, after benchmarks I found it
 was faster to store them on filesystem and let apache serve them directly.
 
 -Andy

That will always be the (much) faster option. There's basically no CPU
overhead; the web server can tell the kernel to copy the image from the
filesystem cache directly into a network buffer, and off it goes. Even Apache
can serve at line speed like that.

It's a lot easier to manage the images if they're in the database, though, 
especially if you run off multiple web servers. If CPU overhead is actually an 
issue, you can eliminate most of the speed hit by sticking a caching proxy 
server like Varnish in front of your site, or by offloading the image serving
to a pass-through CDN. Just make sure images get a new URL path if they change
content.




Re: [GENERAL] pg_standby replication problem

2014-06-09 Thread Alan Hodgson
On Monday, June 09, 2014 04:28:53 PM Khangelani Gama wrote:
 Please help me with this, my secondary server shows a replication problem.
 It stopped at the file called *00054BAF00AF…*; then from there the
 primary server kept on sending WAL files, until the WAL files used up the
 disk space in the data directory. How do I fix this problem? It's Postgres
 9.1.2.
 

It looks to me like your archive_command is probably failing on the primary 
server. If that fails, the logs will build up and fill up your disk as 
described. And they wouldn't be available to the slave to find.
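
Two illustrative checks on the primary (the log path assumes a 9.1 RPM layout):

    grep 'archive command failed' /var/lib/pgsql/9.1/data/pg_log/*.log
    du -sh /var/lib/pgsql/9.1/data/pg_xlog   # grows without bound while archiving fails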





Re: [GENERAL] pg_standby replication problem

2014-06-09 Thread Alan Hodgson
On Monday, June 09, 2014 08:05:41 PM Khangelani Gama wrote:
 Hi All
 
 I would like to re-post the problem we have. The secondary server ran out
 of disk space due to the replication problem (connection timeout).

The secondary server would not (could not) run out of drive space due to a 
problem on the primary. You probably need to figure out why that server is out 
of drive space and fix it, and then I expect your replication problem will fix 
itself. If you do not have a process cleaning up old archived WAL files, and 
those are stored on the secondary, that could be the source of your problem.

If you also have a separate networking issue (for the connection timeout), 
then you might need to fix that, too.






Re: [GENERAL] bytea Issue - Reg

2014-06-06 Thread Alan Hodgson
On Wednesday, June 04, 2014 10:49:18 PM sramay wrote:
 relation public.file_attachments does not exist

... is almost certainly not a size problem. What does your PostgreSQL log say?
I suspect your app is connecting to the wrong database.
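
A quick sanity check from the application host (connection parameters are
assumptions):

    psql -h dbhost -U appuser -d appdb -c 'SELECT current_database();'
    psql -h dbhost -U appuser -d appdb -c '\dt public.*'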




Re: [GENERAL] CentOS 6 and Postgresql 9.3.4 from PGDG

2014-04-14 Thread Alan Hodgson
On Monday, April 14, 2014 09:13:51 AM Steve Clark wrote:
 How did you deal with binaries and libraries, as well as third party apps
 like perl modules or php/apache modules?

The 8.4 library package usually ends up installed to satisfy other package 
requirements.

Binaries get handled through the alternatives system.






Re: [GENERAL] Linux vs FreeBSD

2014-04-09 Thread Alan Hodgson
On Wednesday, April 09, 2014 09:02:02 PM Brent Wood wrote:
 Given the likely respective numbers of each OS actually out there, I'd
 suggest BSD is very over-represented in the high-uptime list, which is
 suggestive.

Suggestive of ... sysadmins who don't do kernel updates?





Re: [GENERAL] Is it safe to stop postgres in between pg_start_backup and pg_stop_backup?

2014-04-03 Thread Alan Hodgson
On Thursday, April 03, 2014 02:48:03 PM Steven Schlansker wrote:
 On Apr 2, 2014, at 3:08 PM, Jacob Scott jacob.sc...@gmail.com wrote:
   • pg_start_backup
   • Take a filesystem snapshot (of a volume containing postgres data but not pg_xlog)
   • pg_stop_backup
   • pg_ctl stop
   • Bring a new higher-performing disk online from snapshot
   • switch disks (umount/remount at same mountpoint)
   • pg_ctl start

... with a recovery.conf in place when starting the new instance.

 
 Assuming you ensure that your archived xlogs are available to the new
 instance the same as they were to the old

And make sure they're archived to a different disk.

 Another option you could consider is rsync.  I have often transferred
 databases by running rsync concurrently with the database to get a “dirty
 backup” of it.  Then once the server is shutdown you run a cleanup rsync
 which is much faster than the initial run to ensure that the destination
 disk is consistent and up to date.  This way your downtime is limited to
 how long it takes rsync to compare fs trees / fix the inconsistencies.
 

This would be simpler.





Re: [GENERAL] Is it possible to pip pg_dump output into new db ?

2014-03-25 Thread Alan Hodgson
On Tuesday, March 25, 2014 02:56:48 PM Frank Foerster wrote:
 Hi,
 
 we are currently in the process of upgrading a production/live 1 TB
 database from 9.2 to 9.3 via pg_dump, which is quite a lengthy process.
 
 Fortunately we have a capable spare-server so we can restore into a clean,
 freshly setup machine.
 
  I just wondered whether the intermediate step of writing the dump-file and
 re-reading it to have it written to the database is really necessary. Is
 there any way to pipe the dump-file directly into the new
 database-process or would such functionality make sense ?
 
 I can only speak for us, but each time we do a dump/restore we need to
 extract/copy/move very large files and piping directly into something like
 psql/pg_restore on another machine etc. would greatly reduce
 upgrade-time/pain.
 
 Thanks and best regards,
 
 Frank

Sure. For maximum speed, something like:

pg_dump [options] source_db | pigz - | \
    ssh -e none user@target 'gunzip - | psql [options] target_db'

Depending on your hardware, though, doing a custom backup to a target file and 
then using it for a parallel restore would probably overall end up being 
faster, plus you get to keep the backup if needed. In my experience, the 
restore is a lot slower than the backup.
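
A sketch of that alternative (the job count is an assumption; -j needs
PostgreSQL 8.4 or later):

    pg_dump -Fc -f mydb.dump source_db
    pg_restore -j 4 -d target_db mydb.dump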

Slony is also great, to save most of the downtime. At the expense of a lot of 
setup and testing time.




Re: [GENERAL] PostgreSQL Point In Time Recovery

2013-10-25 Thread Alan Hodgson
On Thursday, October 24, 2013 11:13:34 PM Jayadevan wrote:
 Alan Hodgson wrote
 
  That's basically what warm standby's do, isn't it? As long as they keep
  recovery open it should work.
 
 A warm standby will be almost in sync with the primary, right? So recovery
 to a point in time (like 10 AM this morning) won't be possible. We need a
 base, but it shouldn't be so old that it takes hours to catch up - that was my
 thought. As John mentioned, looking at the WAL/transaction numbers, time to
 recover etc need to be looked at.
 

Well, yeah. The point was that you possibly could run it for a while to catch 
up without taking a new base backup if you desired. You should also keep 
copies of it for PITR.





Re: [GENERAL] PostgreSQL Point In Time Recovery

2013-10-24 Thread Alan Hodgson
On Thursday, October 24, 2013 11:31:38 AM John R Pierce wrote:
 On 10/24/2013 9:47 AM, Jeff Janes wrote:
  I restore from my base backup plus WAL quite often.  It is how I get a
  fresh dev or test instance when I want one.  (It is also how I have
  confidence that everything is working well and that I know what I'm
  doing should the time come to do a real restore).  When that starts to
  take an annoyingly long time, I run a new base backup.  How often that
  is, can be anywhere from days to months, depending on what's going on
  in the database.
 
 hey, silly idea formed on half a cup of coffee ... if that base backup
 is in the form of a copy of the data directory (as opposed to tar.gz or
 something), could you 'update' it by pointing an instance of postgres at
 it, and then playing the WAL archive into it, then shutting that
 instance down?   or would it be impossible to synchronize the ongoing
 new WAL's from the master with the timeline of this?

That's basically what warm standbys do, isn't it? As long as they keep
recovery open it should work.

You can also use rsync to take your base backup - just update the rsync copy. 
That's what I do (and keep a separate tarball of that rsync copy, made on the 
backup host).





Re: [GENERAL] Postgres DB crashing

2013-06-20 Thread Alan Hodgson
On Thursday, June 20, 2013 07:52:21 AM Merlin Moncure wrote:
 OP needs to explore use of connection pooler, in particular pgbouncer.
  Anyways none of this explains why the server is actually crashing.

It might be hitting file descriptor limits. I didn't dig into the earlier part 
of this thread much, though.





Re: [GENERAL] Trouble with Postgresql RPM

2013-01-23 Thread Alan Hodgson
On Wednesday, January 23, 2013 09:10:40 AM Ian Harding wrote:
 The System:
 
 Linux beta 2.6.32-279.19.1.el6.x86_64 #1 SMP Wed Dec 19 07:05:20 UTC 2012
 x86_64 x86_64 x86_64 GNU/Linux
 

That looks like a CentOS 6 system.

Go to http://yum.postgresql.org/repopackages.php

Find the repo appropriate for your distribution. Download the repo rpm, 
install that, which will add the repository to your yum configuration.

Install PostgreSQL from there.
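
For example (package names follow the PGDG convention for 9.2 on EL6; the repo
RPM file name is whatever you downloaded from the page above):

    yum install pgdg-centos92-9.2.noarch.rpm      # the downloaded repo RPM
    yum install postgresql92-server postgresql92
    service postgresql-9.2 initdb && service postgresql-9.2 start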





Re: [GENERAL] pg_Restore

2013-01-21 Thread Alan Hodgson
On Monday, January 21, 2013 08:40:05 PM bhanu udaya wrote:
 Hello, thanks a lot for all your replies. I tried all the settings suggested;
 it did not work. pg_restore is very slow. It does not finish in less than
 1 1/2 hours. Can you please let me know the procedure for the template
 approach? Will it restore the data also? Please update. I need the database
 (metadata + data) to be restored back after certain Java executions. Ex: I
 have a 9.5 GB database (with 500 schemas + data). This is treated as the base
 database and it needs to be restored every time after certain transactions.

Don't use pg_restore, do the backups/restorations outside of PostgreSQL:

- run on a server with a snapshot-capable volume manager, use that for quick 
restores 
- just try using rsync from a backup copy of the base data directory 

(either of the above require PostgreSQL to not be running during the 
restorations)
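
A minimal sketch of the rsync variant (paths are assumptions; the server must
be stopped during the copy):

    pg_ctl -D "$PGDATA" stop -m fast
    rsync -a --delete /backups/base_data/ "$PGDATA"/
    pg_ctl -D "$PGDATA" start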





Re: [GENERAL] Table with million rows - and PostgreSQL 9.1 is not using the index

2012-12-04 Thread Alan Hodgson
On Wednesday, December 05, 2012 02:44:39 AM Edson Richter wrote:
 Sort  (cost=11938.72..11938.74 rows=91 width=93)
   Sort Key: t0.nome
   ->  Nested Loop  (cost=0.00..11938.42 rows=91 width=93)
         ->  Nested Loop  (cost=0.00..11935.19 rows=91 width=85)
               ->  Seq Scan on logradouro t2  (cost=0.00..11634.42 rows=91 width=81)
                     Filter: ((cep)::text ~~ '81630160%'::text)

According to that the logradouro table only has 91 rows, which is why it seq-
scanned it. Has it been analyzed? 

Also, partial text matches require a special index declaration, as I recall. 
Maybe post a \d of each table to help troubleshoot this.
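
For the left-anchored pattern shown here, an illustrative index that LIKE
'prefix%' can use in a non-C locale would be:

    # use varchar_pattern_ops instead if the column is varchar
    psql mydb -c 'CREATE INDEX logradouro_cep_idx ON logradouro (cep text_pattern_ops);'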




Re: [GENERAL] lock database share

2012-11-05 Thread Alan Hodgson
On Monday, November 05, 2012 05:15:41 AM salah jubeh wrote:
 Hello,
 
 I have the following scenario, I would like to upgrade a database server
 from 8.3 to 9.1. The upgrade includes also a hardware upgrade.
 
 
 I would like to have the following
 
 1. Make sure that the upgraded server and the old server have the same
    snapshot of data.
 2. Make sure that the old server data is accessible read-only after taking
    the data snapshot, until the new server is live and tested.
 
 
 The lock database command is not yet implemented, so what are the ways to
 hack this command?  Also, I need to make sure that the server is down for a
 very short time, so is there a better way than using pg_dump and pg_restore
 across different database server versions?
 
 Regards

Using Slony to assist in your migration will meet all your requirements:

http://slony.info/

You should read the documentation thoroughly.  And try everything out on test 
machines before doing it to your production servers.





Re: [GENERAL] 9.1 to 9.2 requires a dump/reload?

2012-10-22 Thread Alan Hodgson
On Monday, October 22, 2012 05:55:07 PM Nikolas Everett wrote:
 I was just looking at
 http://www.postgresql.org/docs/devel/static/release-9-2.html and it
 mentioned that a dump/reload cycle was required to upgrade from a previous
 release.  I just got done telling some of my coworkers that PG had been
 bitten by this enough times that they were done with it.  Am I wrong?  Is
 this normal?
 
 I see that pg_upgrade is an option.  Having never used how long should I
 expect pg_upgrade to take?  Obviously we'll measure it in our environment,
 but it'd be nice to have a ballpark figure.

pg_upgrade using hard links should only take a minute or two. You'll also need
to shuffle around packages, services, and config files. The slowest part for any
decent-sized database will be doing an analyze after bringing it up under 9.2,
though. So however long that takes for your db, plus maybe 10-15 minutes or so,
if you've practiced.
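
An illustrative invocation (paths follow the PGDG RPM layout; adjust to your
install):

    pg_upgrade --link \
      -b /usr/pgsql-9.1/bin -B /usr/pgsql-9.2/bin \
      -d /var/lib/pgsql/9.1/data -D /var/lib/pgsql/9.2/data
    # the slow part afterwards: regenerate planner statistics
    vacuumdb --all --analyze-only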






Re: [GENERAL] allow servers to access to the same data

2012-10-17 Thread Alan Hodgson
 is it that you want?
 I've come across a few mentions of Heartbeat being used for PostgreSQL
 failover; do you have any links to more information about this?

If you're going to use Heartbeat on a 2-server setup, you should use DRBD for 
the replication, not the PostgreSQL replication. DRBD basically does the 
equivalent of RAID-1 mirroring between 2 servers.

http://www.linuxjournal.com/article/9074 is dated but probably still covers 
everything you need to do to make it work. 

If you have questions about setting this up, you should find a linux clustering 
group to ask. It can be quite complex if you aren't very familiar with Linux 
system administration, and this is not a good forum for the followups.




Re: [GENERAL] Securing .pgpass File?

2012-10-01 Thread Alan Hodgson
On Monday, October 01, 2012 03:10:43 PM Shaun Thomas wrote:
 I can't remember about Puppet since I haven't used it in so long, but
 bcfg2 is basically just a giant directory structure, and we put ours in
 GIT for safekeeping and to track changes. Implementing ACLs in GIT is a
 bit of a PITA, so we're avoiding that as a last resort.

You could maybe put the pgpass stuff in a separate repo with different
permissions and set it up as a submodule. Haven't tried it.
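
A rough sketch of the idea (repo URL and path are assumptions, and as said,
untested):

    git submodule add git@git.internal:ops/pgpass-secrets.git bcfg2/pgpass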





Re: [GENERAL] Multiple indexes, huge table

2012-09-06 Thread Alan Hodgson
On Thursday, September 06, 2012 05:06:27 PM Jeff Janes wrote:
 For updating 20 million out of 500 million rows, wouldn't a full table
 scan generally be preferable to an index scan anyway?
 

Not one table scan for each row updated ...





Re: [GENERAL] Tigger after delete with plpgsql

2012-08-31 Thread Alan Hodgson
On Friday, August 31, 2012 02:10:47 PM Fellipe Henrique wrote:
 Hello, I'm trying to use this code for my AFTER DELETE trigger:
 
   delete
   from MOVIMENTO
   where (IDEMPRESA = OLD.idempresa) and
 (upper(TABELA) = 'NFENTRADA_ITEM') and
 (CODIGO = OLD.idempresa_item);
 
  But when I execute it, it shows an error that says
 
  "old is not initialized" (sorry, but I don't know the exact error message
  because my OS gives it in Portuguese)
 
  my question is: how can I use OLD and NEW in Postgres?
 

Just like that. Which implies something else is wrong. Please post the whole 
trigger function and a \d of the table where this trigger is used, and the SQL  
that you are executing that results in the error.





Re: [GENERAL] Tigger after delete with plpgsql

2012-08-31 Thread Alan Hodgson
On Friday, August 31, 2012 03:14:15 PM Fellipe Henrique wrote:
 CREATE TRIGGER nfentrada_item_tr1
   BEFORE DELETE
   ON public.nfentrada_item FOR EACH ROW
   EXECUTE PROCEDURE public.nfentrada_item_ad0();
 
 here is my nfentrada_item_ad0();
 
 delete
   from MOVIMENTO
   where (IDEMPRESA = OLD.idempresa) and
 (upper(TABELA) = 'NFENTRADA_ITEM') and
 (CODIGO = OLD.idnfentrada_item);
   return old;
 
 I just want to delete all rows in my MOVIMENTO table matching these
 conditions.
 

And the error appears when you do a DELETE FROM public.nfentrada_item? Can you
post the entire function declaration with the CREATE command?





Re: [GENERAL] Tigger after delete with plpgsql

2012-08-31 Thread Alan Hodgson
On Friday, August 31, 2012 12:12:32 PM fellipeh wrote:
 Yes, the error appears when I delete nfentrada_item row

Well ... I don't know. That runs fine here. Can you post the actual error 
message you get? Or a \d on both of those tables.





Re: [GENERAL] Tigger after delete with plpgsql

2012-08-31 Thread Alan Hodgson
On Friday, August 31, 2012 12:41:42 PM fellipeh wrote:
 Here is error msg:
 http://postgresql.1045698.n5.nabble.com/file/n5722180/erro_PG.png
 
 sorry, but in portuguese..
 
 
 

What is the function movimento_ad0()?






Re: [GENERAL] psql unix env variables

2012-08-29 Thread Alan Hodgson
On Wednesday, August 29, 2012 12:35:32 PM Little, Douglas wrote:
 Is there a method for having unix env variables incorporated into a psql sql
 statement? I.e.
 export var='dev'
 psql -c 'select count(*) from $var.customer;'
 

Use double-quotes, not single-quotes. Bash won't interpolate variables into 
single-quoted strings.
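
For example:

    export var='dev'
    psql -c "select count(*) from $var.customer;"   # double quotes: $var expands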




Re: [GENERAL] Which postgresql should i choose to migrate from 8.3 ? 8.4 or 9.1?

2012-08-28 Thread Alan Hodgson
On Tuesday, August 28, 2012 11:52:26 AM Ing.Edmundo.Robles.Lopez wrote:
 Please, could you help give my boss peace of mind and make sure
 that the version we choose is as smoothly compatible as possible with
 version 8.3?
 

All compatibility changes are indicated in the upgrade notes from one version 
to another. Read them, see if they apply to you.

Install 9.1 on a test server. Test it with your data. It's free :)

 Indeed, restoring the dump of the 8.3 into 9.1 hit a bug in UTF-8; will I
 have the same problem with 8.4?

Probably. If you have data in your UTF-8 database that is not valid UTF-8 you 
will eventually need to clean it up.

Also, some of the default config options have changed (in particular bytea 
encoding options); they were the only issues I ran into when upgrading. But 
only you know what features your applications use.
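
For example, 9.0 changed the default bytea output format; an illustrative way
to restore the old behavior for one database:

    psql -c "ALTER DATABASE mydb SET bytea_output = 'escape';"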





Re: [GENERAL] Amazon High I/O instances

2012-08-22 Thread Alan Hodgson
On Wednesday, August 22, 2012 04:10:01 PM Andrew Hannon wrote:
 Just looking into High IO instances for a DB deployment. In order to get
 past 1TB, we are looking at RAID-0. I have heard
 (http://hackerne.ws/item?id=4266119) there might be a problem if TRIM isn't
 supported. Does anyone know if it is and has anyone used RAID-0 on these
 instances? (Linux of course…)

Just use LVM striping. The TRIM concern seems to be mostly conjecture anyway.
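
An illustrative two-disk stripe with LVM (volume group and sizes are
assumptions):

    lvcreate -n pgdata -i 2 -I 256 -L 1.5T vg_ssd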

I note that the SSDs are only instance storage. The data will be gone when the 
instance goes away. I have used instance storage in replicated setups but it 
always feels rather fragile unless your data really is transient or you can 
maintain 2 replicas.

Their other new service, provisioned IOPS for EBS, might be more useful for a 
persistent database. Although not nearly SSD speeds, of course.




Re: [GENERAL] Migrating from 8.3 to 9.1 - date/time storage types do not match

2012-08-22 Thread Alan Hodgson
On Wednesday, August 22, 2012 05:17:10 PM Scott Briggs wrote:
 So we have a large TB database that we need to migrate to 9.1 and I'm
 wondering if there's a way to do this process in stages.
 
 Since the date/time storage types changes between 8.3 and 8.4, I
 realize we'll have to dump the database and my plan is to create a
 backup using pg_start_backup(), run that backup on another server
 using postgres 8.3 which I could then dump and import into a 9.1
 server.
 
 My question is, is there some way I can take WAL files created since
 the beginning of the original backup on the 8.3 server, convert those
 into text that I could then apply to the 9.1 server?  The mysql
 equivalent to this is mysqlbinlog which you can apply to mysql binary
 log files.

The most likely way to get this done is with Slony. Setup a Slony slave, 
upgrade the slave to 9.1 with a dump/reload, run it and let Slony catch it up, 
and then promote it to be the Slony cluster master and switch your clients 
over.

Slony is table based and certainly not trivial to setup, but it'll work. If 
your change rate isn't too horrendously high, anyway.

The Slony tables need some updating after a dump/reload to fix OID issues. 
There are tools included to do that. You'd need to experiment to nail down the 
process. And you would want to make sure no one is making DDL changes outside 
of Slony during the whole process.

If you aren't already familiar with Slony, this is probably weeks of work to 
fully prepare for and get right. But it will let you avoid the downtime.




Re: [GENERAL] Migrating from 8.3 to 9.1 - date/time storage types do not match

2012-08-22 Thread Alan Hodgson
On Wednesday, August 22, 2012 02:43:05 PM Alan Hodgson wrote:
 The most likely way to get this done is with Slony. Setup a Slony slave,
 upgrade the slave to 9.1 with a dump/reload, run it and let Slony catch it
 up, and then promote it to be the Slony cluster master and switch your
 clients over.
 

Or, of course, just set up a clean 9.1 and let Slony populate it. Duh. Too long
since I did that ...





Re: [GENERAL] REINDEX and COPY is wainting since Jun 21!

2012-07-27 Thread Alan Hodgson
On Friday, July 27, 2012 11:59:49 AM Ing.Edmundo.Robles.Lopez wrote:
 Hi, I have Postgres 8.3 running on SCO OpenServer 5.0.7.
 
 Today I had problems starting psql; the error message was 'FATAL:
 out of memory. DETAIL: Failed on request of size ...', and after I checked
 the processes I noticed the following:
 
 the oldest REINDEX had been running since Jun 21, for 35 days 12 hrs
 (35-12:12:08) WTF

Well, it looks like either the COPY or the VACUUM has been running since the
20th and is blocking the other processes. Is that a VACUUM FULL, perhaps?

 
 Can I safely kill these processes?
 Is there any risk of corrupting the database?

I would use pg_cancel_backend() on them. Supposed to be safe, and I've never 
had corruption issues.
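
An illustrative way to target them (8.3 column names; procpid and current_query
were renamed in 9.2):

    psql <<'SQL'
    SELECT pg_cancel_backend(procpid)
    FROM pg_stat_activity
    WHERE current_query LIKE 'REINDEX%' OR current_query LIKE 'COPY%';
    SQL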





Re: [GENERAL] Trouble with NEW

2012-07-18 Thread Alan Hodgson
On Wednesday, July 18, 2012 11:07:34 AM Bob Pawley wrote:
 Hi
 
 I would appreciate some fresh eyes on this expression -
 
 update p_id.fluids
 set fluid_short =
 (select shape.text
 from  shape, num_search
 where (select st_within(shape.wkb_geometry,
 st_geometryn(num_search.the_geom4, 1)) = 'true') and text !~ '[0-9]')
 where p_id.fluids.fluid_id = NEW.fluid_id;
 
  I receive an error: record "new" has no field "fluid_id".
 

Is that run within an insert or update trigger function? Does the table it's 
on have a field named fluid_id?

 Bob





Re: [GENERAL] Trouble with NEW

2012-07-18 Thread Alan Hodgson
On Wednesday, July 18, 2012 12:28:00 PM Bob Pawley wrote:
 It's an insert after trigger function.
 
 The table has a column named fluid_id.
 
 Bob

Could you post the whole function? And a \d on the table? 




Re: [GENERAL] efficiency of wildcards at both ends

2012-06-20 Thread Alan Hodgson
On Wednesday, June 20, 2012 01:10:03 PM Sam Z J wrote:
 Hi all
 
  I'm curious how wildcards at both ends are implemented, e.g. LIKE '%str%'.
  How efficient is it if that's the only search criterion against a large
  table? How much does indexing the column help, and roughly how much more
  space is needed for the index?
 

Indexing helps not at all. If the search string starts with a wildcard you 
will always get a sequential scan of the whole table.

Look at the full text search documentation for a better approach.
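
A minimal full text search sketch (table and column names are assumptions):

    psql mydb <<'SQL'
    ALTER TABLE docs ADD COLUMN tsv tsvector;
    UPDATE docs SET tsv = to_tsvector('english', body);
    CREATE INDEX docs_tsv_idx ON docs USING gin (tsv);
    SELECT id FROM docs WHERE tsv @@ to_tsquery('english', 'str');
    SQL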



Re: [GENERAL] Postgres no longer starts

2012-05-30 Thread Alan Hodgson
On Wednesday, May 30, 2012 08:22:58 PM Bart Lateur wrote:
 Luckily this is a development machine, but as we don't know what causes
 the problem we fear we might one day face the exact same problem where
 it does matter: on a production machine. So we'd like to know exactly
 what went wrong..

Change selinux to permissive instead of enforcing and see if PostgreSQL then 
runs. If it does, you can look at the selinux logging to see what would have 
been denied in enforcing mode, and hopefully fix it from there.
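
For example:

    setenforce 0                  # permissive until reboot
    ausearch -m avc -ts recent    # show recent AVC denials from the audit log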




Re: [GENERAL] Reasons for postgres processes beeing killed by SIGNAL 9?

2012-05-22 Thread Alan Hodgson
On Saturday, May 19, 2012 04:42:16 PM Clemens Eisserer wrote:
 Hi again,
 
 We are still constantly getting postgresql processes killed by signal
 9 from time to time, without any idea why or how.
 Syslog seems completely clean.
 
 In case a postgresql process would exceed some restricted resources
 like file descriptors, would the kernel choose to terminate it using
 SIGKILL? Are there any other common examples / occurences where
 processes are terminated this way automatically?

Check dmesg or the kernel log. I'd guess it's the OOM-killer. Assuming this is 
on Linux, that is.
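
For example:

    dmesg | grep -i 'killed process'
    grep -i oom /var/log/messages   # log file name varies by distro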




Re: [GENERAL] ERROR: invalid byte sequence for encoding UTF8: 0xc325

2012-03-30 Thread Alan Hodgson
On Friday, March 30, 2012 10:00:31 AM Prashant Bharucha wrote:
 Hello All
 
 Could you help me automatically convert all db requests into UTF-8 encoding?
 

Set your session client_encoding to match your data. That's about as close as 
you can get to automatic.

  http://www.postgresql.org/docs/9.1/static/multibyte.html
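
For example (the encoding value is illustrative):

    psql -c 'SHOW client_encoding;'
    PGCLIENTENCODING=LATIN1 psql mydb   # per-session override for any libpq client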

A better bet is to make your application end-to-end UTF8, which is doable, 
more or less, in a web environment, although misbehaving clients will still 
sometimes send you bad data. For any other data source (especially email) 
you'll probably get tons of badly encoded data.

If you're looking for a silver bullet, there isn't one.




Re: [GENERAL] where to find initdb log file?

2012-03-30 Thread Alan Hodgson
On Saturday, March 31, 2012 01:52:37 AM clover white wrote:
 HI, I have a problem when using pg; thanks for the help. :)
 
 I used the initdb command, but nothing was created in my pgdata directory.
 However, I used ps to list all the processes, and I found the initdb
 process kept running all the time.
 
 I don't know what happened, and I want to read the log file of initdb.
 
 Could someone tell me where the log is? thank you.
 
 my pg version is 9.1.2

I don't believe there is a log. initdb logs what it's doing on stdout and then 
finishes.

It does have a debug option to print more info while running.
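
(For example; the data directory path is assumed:)

    initdb -D /var/lib/pgsql/data --debug 2>&1 | tee initdb.log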


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Facing error while restoring the database

2012-03-27 Thread Alan Hodgson
On Tuesday, March 27, 2012 08:20:23 PM Akshay Joshi wrote:
 In my case I won't allow anyone to insert/modify the rows of the sample_one
 table. I have already inserted some rows in the sample_one table, and I
 want a constraint that the number of array elements in sample_one.param_names
 and sample.params must be the same. That's why I have created a
 CHECK constraint on the sample table. Users can insert, modify and delete the
 rows of the sample table, so I don't want any mismatch in the number of
 array elements between sample_one.param_names and sample.params.
 

Well you have a couple of options.

1) Redesign to use a trigger instead of a check constraint.

2) During restore, you can use a custom table of contents extracted from the 
dump, remove the constraint from that, do your restore, and then create the 
constraint manually afterwards. This requires that you use the custom dump 
format and pg_restore, of course.
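
(A sketch of option 2; file and database names are hypothetical:)

    pg_restore -l mydb.dump > mydb.toc    # extract the table of contents
    # edit mydb.toc and comment out (prefix with ';') the CHECK constraint
    pg_restore -L mydb.toc -d mydb mydb.dump
    # then recreate the constraint with ALTER TABLE ... ADD CONSTRAINT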


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] How to recover data from cluster

2012-02-16 Thread Alan Hodgson
On Thursday, February 16, 2012 09:18:34 PM Andrus wrote:
 PostgreSQL 8.4 on Windows crashed. After that, Windows disk repair was
 used to repair the hard drive. The Data/base directory from the crashed
 server contains lots of files, and all files are readable.
 
 PostgreSQL 8.4 was reinstalled on a new server and the data directory was
 set to the directory from the crashed server.
 
 pgAdmin shows that there is only a public schema without any tables.
 Actually there were two schemas with lots of tables.
 
 How can I fix this so the old data is accessible?
 

It sounds like your new installation is not in fact using the old data 
directory, but a new empty one. 
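
(One quick check on the running server:)

    SHOW data_directory;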

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] initdb $PGDATA not working

2012-02-09 Thread Alan Hodgson
On Thursday, February 09, 2012 10:25:51 PM Daniel Vázquez wrote:
 Hi!
 
 I've set my PGDATA variable in profile
 export PGDATA=/home/mydata/pgsql/data
 
 Testing variable for correct set in enviroment
 $ echo $PGDATA
 /home/mydata/pgsql/data
 
 but when execute:
 $ sudo /etc/init.d/postgresql-9.1 initdb
 
 or execute:
 $ sudo /etc/init.d/postgresql-9.1 initdb -D /home/mydata/pgsql/data
 
 The cluster always is created on default path /var/lib/pgsql/9.1/data/
 
 Will install in CentOS 5.7

The init script doesn't take command line options except the initdb action. 
And it sets PGDATA itself.

Perhaps su postgres -c 'initdb -D /home/mydata/pgsql/data' or something 
similar might work better. Or modify the init script before using it, which 
would have the advantage that it might also start the server later using your 
new data directory.


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Puzzling full database lock

2012-02-01 Thread Alan Hodgson
 On Wed, Feb 1, 2012 at 7:38 PM, Christopher Opena counterv...@gmail.com 
wrote:
  Hello folks,
  
  We've been running into some very strange issues of late with our
  PostgreSQL database(s).  We have an issue where a couple of queries
  push high CPU on a few of our processors and the entire database locks
  (reads, writes, console cannot be achieved unless the high CPU query
  procs are killed).  Further investigation shows ~59% total cpu usage
  (we have 16 total cores), low io, and mid-to-low memory usage (we have
  74GB of memory, shared_buffers=16GB).

Define "low I/O". The only things I've ever seen interfere with console access 
are running out of memory or excessive disk I/O. If you're seeing even 6-12% 
iowait on a 16-core machine you might very well have swamped your available 
disk I/O.
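
(For example, to watch disk utilization while the problem is happening:)

    iostat -x 5    # per-device utilization and service times (sysstat)
    vmstat 5       # the "wa" column is iowait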


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Puzzling full database lock

2012-02-01 Thread Alan Hodgson
On Wednesday, February 01, 2012 05:13:15 PM Christopher Opena wrote:
 Do you mean 6-12% of total iowait, or per cpu?  Our average iowait in the
 last week is 34.31% of a total 1600% with an average idle of 1451.76%.  Our
 iowait *does* spike occasionally (today it went up to 148.01%) but it
 doesn't coincide with the lock happening.  At the time of the lock we were
 at 10.58% iowait, which is quite a bit below our average.
 

Total, but it doesn't sound like that's the problem.


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] PG 9.0 EBS Snapshot Backups on Slave

2012-01-23 Thread Alan Hodgson
On Monday, January 23, 2012 07:54:16 PM Andrew Hannon wrote:
 It is worth noting that, the slave (seemingly) catches up eventually,
 recovering later log files with streaming replication current. Can I trust
 this state?
 

Should be able to. The master will also actually retry the logs and eventually 
ship them all too, in my experience.


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] HA options

2012-01-17 Thread Alan Hodgson
On Tuesday, January 17, 2012 10:34:54 AM Tim Uckun wrote:
 http://www.drbd.org/ ??
 Built in hot standby and hand rolled scripts.
 

I have a few clusters running on EC2 using DRBD to replicate between 
availability zones. It's not fast, but it works. If your write load is under 
30MB/sec it's definitely an option. I run DRBD over SSH tunnels to get around 
the random IP address issue. I use heartbeat on top for resource 
management/failover (I know it's not ideal with the single communications 
path, but what can you do), and DDNS with really short TTLs.

I've also had success using the native replication to spin read-only slaves off 
those HA masters, using pgpool to load balance selects.
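
(An illustrative pgpool-II fragment for the select load-balancing piece;
hostnames are hypothetical:)

    # pgpool.conf
    load_balance_mode = on
    backend_hostname0 = 'master.internal'
    backend_port0     = 5432
    backend_hostname1 = 'slave1.internal'
    backend_port1     = 5432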


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] HA options

2012-01-16 Thread Alan Hodgson
On Tuesday, January 17, 2012 10:34:54 AM Tim Uckun wrote:
 Hey Guys.
 
 It's been a while since I looked into this and it seems like new
 options have cropped up for postgres HA and scalability. Is there a
 consensus on the best way to achieve HA? My primary concern is HA,
 but of course any scalability gains would be more than welcome. All
 the servers will be virtual on either Rackspace or Amazon, so that's
 possibly a complication.
 
 http://www.drbd.org/ ??
 Built in hot standby and hand rolled scripts.

I have a few clusters running on EC2 using DRBD to replicate between 
availability zones. It's not fast, but it works. If your write load is under 
30MB/sec it's definitely an option. I run DRBD over SSH tunnels to get around 
the random IP address issue. I use heartbeat on top for resource 
management (I know it's not ideal with the single communications 
path, but what can you do), and DDNS with really short TTLs for the cluster 
names.

I've also had success using the native replication to spin read-only slaves off 
those HA masters, using pgpool to load balance selects.

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Postgresql allow connections without password

2012-01-12 Thread Alan Hodgson
On Thursday, January 12, 2012 02:16:04 AM debian nick wrote:
 Any help will be really appreciate.

Check your pg_hba.conf file for any entries with "trust" or "ident". Remove 
them and restart the server.

Also look for .pgpass files in the home directories of any user seeing this.
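
(For example, a password-authenticated pg_hba.conf entry; adjust the
address range to your environment:)

    # TYPE  DATABASE  USER  ADDRESS         METHOD
    host    all       all   192.168.0.0/24  md5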

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] indexes and tables

2011-12-19 Thread Alan Hodgson
On Sunday, December 18, 2011 04:00:14 PM amit sehas wrote:
 Yes, I was trying to determine how to make a View work in this situation.
 From reading the details on PostgreSQL, Views are not persistent, i.e. they
 are just SQL query shorthand rather than actually creating any physical
 entity backing it (I would imagine that creating such a physical backing
 would be inordinately difficult to keep updated given the arbitrary
 original query that was utilized to define it...)...
 

The views are not materialized. But the SQL they execute does make use of  
existing indexes.
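
(A quick illustration with hypothetical names: the view is expanded into
the outer query, so the base table's index is still usable:)

    CREATE VIEW active_items AS
        SELECT * FROM items WHERE active;
    -- an index on items(name) can still be used here:
    SELECT * FROM active_items WHERE name = 'widget';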

 Is there some way inheritance can play a part in this, such as
 
 a) define a base table with 3 fields and place indexes on each one of the
3 fields
 
 b) define the 10 types in which we just override the name of each one of
 the 3 fields (I do not know if there is such a concept as overriding the
 name of a field...) perhaps a computed field which depends upon the field
 inherited...
 
 that is as close as I could think of how to solve such an issue... but I am
 not sure how inheritance works in Postgres...

Inheritance will not help you with columns with different names. And indexes 
aren't inherited anyway.

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] CPU move

2011-11-26 Thread Alan Hodgson
On Saturday, November 26, 2011 10:18:56 AM Carlos Henrique Reimer wrote:
 Hi,
 
 We're planning to move our PostgreSQL database from one CPU box to another
 box.
 
 I'm considering an alternative procedure for the move as the standard one
 (pg_dump from the old, copy dump to the new box, psql to restore in the
 new) will take about 10 hours to complete. The idea is installing the same
 Linux and PostgreSQL versions in the new box and copying the entire database
 cluster directory from the old to the new one using the scp Linux command.
 
 Checked the locale files; they were not customized and are the same on
 both boxes.
 
 Which risks am I exposed to in this approach?

This will work, if the versions are the same and the source database is 
stopped before you start the copy. Make sure you fix up the permissions 
afterwards.

It would be faster to use rsync - you can run a first pass with the source 
database running, then stop it and run a last rsync -ar --delete to pick up 
any changes. Your only downtime need be during the last pass.
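
(A sketch of that sequence; hostnames and paths are assumed:)

    # first pass, with the source database still running:
    rsync -ar /var/lib/pgsql/data/ newbox:/var/lib/pgsql/data/
    # stop PostgreSQL on the old box, then the final pass:
    rsync -ar --delete /var/lib/pgsql/data/ newbox:/var/lib/pgsql/data/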

Make sure you've copied over any sysctl.conf settings.

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] can I show number of records returned by a query in a log?

2011-11-16 Thread Alan Hodgson
On November 16, 2011 05:07:05 PM hyelluas wrote:
 I see 900 queries sent by 1 client in 7 min with 1 click on the screen -
 does the log show the real thing?

The logs show the real thing.  Gotta love ORMs.

 
 
 Is it possible to log the number of records returned by that query?

I doubt the record count or data volume is the problem. It's more likely the 
latency cost of sending 900 queries one at a time and waiting for the replies 
at VPN latencies. 

I don't know how to log the result record count, though, maybe someone else 
does.

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Regarding the shared disk fail over cluster configuration

2011-11-02 Thread Alan Hodgson
On November 2, 2011 08:55:39 AM Debasis Mishra wrote:
 My doubt is whether the cluster should start the postgres service on the
 secondary node during failover, or whether postgres will always be running.
 My understanding was that postgres would be running on both nodes, pointing
 to the shared dbdata, and if node1 (primary) fails then the cluster software
 should bring node2 (secondary) up; once it brings node2 online, postgres
 must already be running there.

You definitely cannot have 2 postmasters running against the same shared disk. 
The cluster software will need to start PostgreSQL on the secondary node after 
taking over the IP address and shared disk resource.

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Server move using rsync

2011-11-01 Thread Alan Hodgson
On October 31, 2011 03:01:19 PM Stephen Denne wrote:
 I'm wondering whether it's worth doing anyway, simply to check that it
 doesn't do something completely unexpected, which would presumably alert
 us to something we hadn't considered.
 

Testing is always worthwhile, if only to ensure that PostgreSQL will actually 
run with your configuration on the new machine (sufficient shared memory, IP 
addresses specified in postgresql.conf, etc). 

However, assuming the PostgreSQL binary packages you're using are identical, 
and assuming that you aren't changing tablespace pointers around, the rsync / 
restart is pretty fool-proof in terms of reliably copying PostgreSQL itself. 
PostgreSQL is good about updating timestamps on modified files, so you don't 
have to worry about needing the full compare options on rsync or anything; 
"rsync -avr --delete" is generally sufficient.

You might disable WAL archiving during a test startup to avoid sending 
duplicates to your backup server.


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Server hitting 100% CPU usage, system comes to a crawl.

2011-10-27 Thread Alan Hodgson
On October 27, 2011 01:09:51 PM Brian Fehrle wrote:
 We've restarted the postgresql cluster, so the issue is not happening at
 this moment. but running a vmstat 10 had my 'cs' average at 3K and 'in'
 averaging around 9.5K.

Random thought, is there any chance the server is physically overheating? I've 
seen CPUs throttle really low when overheating, which can make otherwise 
normal activity seem really slow.
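
(Hedged ways to check, depending on platform support:)

    sensors                     # from lm_sensors: CPU temperatures
    dmesg | grep -i throttl     # kernel logs thermal throttling events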

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Are file system level differential/incremental backups possible?

2011-10-12 Thread Alan Hodgson
On October 12, 2011 03:04:30 PM Bob Hatfield wrote:
  Anyway, a better way for you would be to do a regular backup (with
  pg_start_backup, copy and pg_stop_backup) and then use wal
  archive_command to keep the xlogs between 2 full backups.
 
 Thanks Julien.  Can pg_start/stop_backup() be used for regular full
 file system backups?   All of the documentation I've read only refers
 to using those for warm standby/wal shipping methods.

The base backup necessary to initialize a warm standby server is a full file 
system backup of the database, which can also be used for restores to any 
point in time after the base backup is completed, assuming you also have all 
the archived WAL files.
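
(The classic sequence, sketched; the label and paths are arbitrary:)

    psql -c "SELECT pg_start_backup('weekly');"
    tar czf /backups/base.tar.gz -C /var/lib/pgsql data
    psql -c "SELECT pg_stop_backup();"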

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] : PostgreSQL Online Backup

2011-10-04 Thread Alan Hodgson
  rsync works fine. Why exactly can't the recovery find the backed up copy
  of 000105390076? Please post your archive_command settings,
  the contents of any script(s) called by that, and the recovery.conf file
  you're using that's having problems, as well as the complete process you
  followed to
  initiate recovery. I strongly suspect you're missing part of the process
  of actually saving the WAL files needed for recovery.

 The recovery is unable to find the WAL archive because it was generated on
 26th September.
 
 Whereas the backup was taken on Oct 2nd, 2011. We deleted all the files.
 
 I do not have that WAL archive copy.
 
 The problem area -
 
 I found that a pg_clog file dated 26th Sep, 2011 is not synced (it's not
 256K).
 

I'm going to need the rest of what I asked for to offer any further suggestions 
- especially the full and exact steps you took to initiate recovery and the 
contents of recovery.conf. Also, please don't top-post.

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] : PostgreSQL Online Backup

2011-10-03 Thread Alan Hodgson
On October 3, 2011 05:33:35 AM Venkat Balaji wrote:
 Did anyone observe this behavior ?? Please help !
 
 This is critical for us. I want to recommend not to use rsync (use cp or
 scp instead) for production backup.
 

rsync works fine. Why exactly can't the recovery find the backed up copy of 
000105390076? Please post your archive_command settings, the 
contents of any script(s) called by that, and the recovery.conf file you're 
using that's having problems, as well as the complete process you followed to 
initiate recovery. I strongly suspect you're missing part of the process of 
actually saving the WAL files needed for recovery.

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] : PostgreSQL Online Backup

2011-09-26 Thread Alan Hodgson
On September 26, 2011 05:49:50 AM Venkat Balaji wrote:
 I tried restoring the backup, after taking the full backup.
 
 Below is what i see in the archive destination.
 
 Postgres was asking for 00010193006F and I tried to find the
 same, and below is what I found...
 
 -rw--- 1 postgres postgres 3.3M Sep 26 02:06
 00010193006F.gz -rw--- 1 postgres postgres  219 Sep 26
 02:53
 00010193006F.00328508.backup.gz
 
 Why is PG (9.0) putting a .backup.gz extension on the WAL archive
 file?


The archive files are created by your archive_command, as specified in 
postgresql.conf. My guess would be that your archive command runs the files 
through gzip as part of archiving (which is fine).

However, the restore_command you specify in recovery.conf must undo this 
compression. So instead of (for example) 'cp -f %f %p', it might instead 
need to look like 'zcat %f > %p'.
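
(A hedged pairing, with an assumed archive path:)

    # postgresql.conf on the master:
    archive_command = 'gzip < %p > /wal_archive/%f.gz'
    # recovery.conf on the restore side:
    restore_command = 'zcat /wal_archive/%f.gz > %p'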

Hope this helps.

 

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Searching through trigger functions

2011-09-26 Thread Alan Hodgson
On September 26, 2011 10:47:20 AM Bob Pawley wrote:
 Hi
 
 I have an error somewhere in one of the trigger functions in my database.
 The error message only told me it was a 'Select * From (table name)'.
 
 Is there a global method of finding where this expression resides, other
 than searching each script individually?


pg_dump -s dbname > file; vi file; /search_string

Alternately, I find it good practice to maintain function definitions outside 
of 
the database, under version control. You can then grep those files as needed.

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Replication between 64/32bit systems?

2011-09-20 Thread Alan Hodgson
On September 20, 2011 01:26:06 PM Hannes Erven wrote:
 So I'd like to ask if there is anything I could do by e.g. changing
 compile-time options at the slave to get things going?
 

No.

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] warm standby - apply wal archives

2011-09-05 Thread Alan Hodgson
On September 5, 2011, MirrorX mirr...@gmail.com wrote:
 thx a lot for your answer.
 
 actually DRBD is the solution I am trying to avoid, since I think the
 performance degrades a lot (I've used it in the past). and also I
 have serious doubts about whether the data is corrupted in case of the
 master's failure, if not all blocks have been replicated to the secondary.
 has anyone faced this situation? any comments on that? thx in advance
 

DRBD mode C is very good. If you're running mode C, when PostgreSQL issues 
an fsync, that doesn't return until the secondary node has the data on disk. 
It's as safe as you're going to get.
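
(The relevant knob, sketched in DRBD 8.x syntax; hostnames, devices and
addresses are hypothetical:)

    resource r0 {
        protocol C;   # writes return only once the peer has them on disk
        on nodea { device /dev/drbd0; disk /dev/sdb1;
                   address 10.0.0.1:7788; meta-disk internal; }
        on nodeb { device /dev/drbd0; disk /dev/sdb1;
                   address 10.0.0.2:7788; meta-disk internal; }
    }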

The performance limit for DRBD is the write speed of a single network 
interface. If you're exceeding that, though, you also aren't going to be 
shipping out WAL segments in real time. I guess also if your nodes aren't 
close by, the latency could be a speed killer, but that's not really the 
normal use case.


Re: [GENERAL] heavy swapping, not sure why

2011-08-29 Thread Alan Hodgson
On August 29, 2011 01:36:07 PM Lonni J Friedman wrote:
 I have several Linux-x86_64 based dedicated PostgreSQL servers where
 I'm experiencing significant swap usage growth over time.

It's the Linux kernel that does it, not PostgreSQL. Set vm.swappiness=0 
(usually in /etc/sysctl.conf) and put that into effect.
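
(For example:)

    echo 'vm.swappiness = 0' >> /etc/sysctl.conf
    sysctl -p    # apply without rebooting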


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] heavy swapping, not sure why

2011-08-29 Thread Alan Hodgson
On August 29, 2011 02:34:26 PM you wrote:
 On Mon, Aug 29, 2011 at 3:46 PM, Alan Hodgson ahodg...@simkin.ca wrote:
  On August 29, 2011 01:36:07 PM Lonni J Friedman wrote:
   I have several Linux-x86_64 based dedicated PostgreSQL servers where
   I'm experiencing significant swap usage growth over time.
  
  It's the Linux kernel that does it, not PostgreSQL. Set vm.swappiness=0
  (usually in /etc/sysctl.conf) and put that into effect.
 
 that won't help and, in almost all cases, is a bad idea.

Overly aggressive swapping with the default settings has frequently caused me 
performance issues. Using this prevents those problems.

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Streaming Replication: Observations, Questions and Comments

2011-08-24 Thread Alan Hodgson
On August 24, 2011 08:33:17 AM Samba wrote:
 One strange thing I noticed is that the pg_xlogs on the master have
 outgrown the actual data stored in the database by at least 3-4 times,
 which was quite surprising. I'm not sure if 'restore_command' has anything
 to do with it. I did not understand why transaction logs would need to be
 so many times larger than the actual size of the database; have I done
 something wrong somewhere?

If you archive them instead of keeping them in pg_xlog, you can gzip them. 
They compress reasonably well.

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Need linux uid in pg-psql

2011-08-19 Thread Alan Hodgson
On August 19, 2011 07:01:33 AM Gauthier, Dave wrote:
 Is there a way to get the Linux idsid of a user, even for a remote network
 connection?
 
 I could write a plperlu function to get this, but I suspect it won't give me
 the original user when there's a remote connection.
 
 Thanks for any suggestions !

There's the ident protocol (identd) for this. It's not commonly used anymore, 
and when present it tends to deliberately obscure the results.

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Add Foreign Keys To Table

2011-07-07 Thread Alan Hodgson
On July 7, 2011 10:40:11 AM Rich Shepard wrote:
 alter table station_information add column sta_type varchar(50)
 unique not null references station_type(sta_type);
 NOTICE:  ALTER TABLE / ADD UNIQUE will create implicit index
 station_information_sta_type_key for table station_information
 ERROR:  there is no unique constraint matching given keys for referenced
 table station_type
 Reading the alter table documentation page for 9.x does not show me what
 I'm doing incorrectly.

You need a unique index on station_type.sta_type.
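
(That is:)

    CREATE UNIQUE INDEX station_type_sta_type_key
        ON station_type (sta_type);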


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Add Foreign Keys To Table

2011-07-07 Thread Alan Hodgson
On July 7, 2011 11:55:25 AM Rich Shepard wrote:
 On Thu, 7 Jul 2011, Alan Hodgson wrote:
  You need a unique index on station_type.sta_type
 
 Alan,
 
station_type(sta_type) is part of a composite primary key. Doesn't
 primary key automatically imply unique and not null?

It implies the composite is unique. Not sta_type.

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Add Foreign Keys To Table

2011-07-07 Thread Alan Hodgson
On July 7, 2011 12:30:35 PM Rich Shepard wrote:
 On Thu, 7 Jul 2011, Alan Hodgson wrote:
  It implies the composite is unique. Not sta_type.
 
OK. Now I understand. How, then, do I add a unique constraint to each
 component of the composite key so I can add them as foreign keys to the
 station_information table? Or, is there another way to add those two
 columns to station_information and then add the foreign key constraints?
 
 Thanks for clarifying,
 

create unique index index_name on table (column).

Or I think you can create a foreign key on a composite like foreign key 
(column1,column2) references table (column1,column2) which probably makes 
more sense if that is a natural key.
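
(Sketched with the poster's station tables; the second key column name is
hypothetical:)

    ALTER TABLE station_information
        ADD FOREIGN KEY (sta_type, sta_name)
        REFERENCES station_type (sta_type, sta_name);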

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Invalid byte sequence for encoding UTF8: 0xedbebf

2011-06-15 Thread Alan Hodgson
On June 15, 2011 01:18:27 PM BRUSSER Michael wrote:
 Unless there are no other options, I don't want to use sed or break the file
 into pieces, if possible.

iconv loads everything into RAM. You can use split, convert the pieces, and 
then recombine; I did that when converting a large database to utf-8 and it 
worked.
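
(A sketch of the split/convert/recombine approach; encodings and sizes are
assumed:)

    split -l 1000000 dump.sql piece_
    for f in piece_*; do iconv -f LATIN1 -t UTF-8 "$f" > "$f.utf8"; done
    cat piece_*.utf8 > dump.utf8.sql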

-- 
Obama has now fired more cruise missiles than all other Nobel Peace prize 
winners combined.

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Write performance on a large database

2011-06-09 Thread Alan Hodgson
On June 9, 2011 05:15:26 AM Håvard Wahl Kongsgård wrote:
 Hi, I have performance issues on a very large database (100GB). Reading from
 the database is no problem, but writing (or heavy writing) is a nightmare.
 I have tried tuning postgresql, but that does not seem to improve the
 write performance.
 To improve the write performance, what are my options?

Buy fast disks (many), attach them to a good RAID controller with a battery-
backed write cache, set up in RAID-10. Or move to SSD.

-- 
Obama has now fired more cruise missiles than all other Nobel Peace prize 
winners combined.

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] replication problems 9.0

2011-06-07 Thread Alan Hodgson
On June 7, 2011 04:38:16 PM owen marinas wrote:
 Thx, Merci, Gracias Rodrigo
 it worked indeed, I'm wondering why replication is not included in All
 

Probably because it gives access to all the data being written to the 
database.

-- 
Obama has now fired more cruise missiles than all other Nobel Peace prize 
winners combined.

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Locale and UTF8 for template1 in 8.4.4

2011-05-06 Thread Alan Hodgson
On May 6, 2011, Iain Barnett iainsp...@gmail.com wrote:
 Would anyone be able to point out to me how I can get the template1
 database to be utf8 and en_GB? (or US, I'm not *that* fussed)

Use the --encoding and --locale options to initdb.
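
(For example, for a UTF8/en_GB cluster; the data directory path is assumed:)

    initdb --encoding=UTF8 --locale=en_GB.UTF-8 -D /usr/local/pgsql/data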


Re: [GENERAL] pgsql 9.0.1 table corruption

2011-04-14 Thread Alan Hodgson
On April 14, 2011 08:10:47 am Dan Biagini wrote:

 I suspect that it may have occurred during a filesystem level backup
 (ie pg_start_backup(), tar -czf..., pg_stop_backup()), as I performed
 a backup and moved the database to a different system.  After
 restoring the files and starting postgres I began getting these
 errors.  I have tried restoring multiple times with the same tar
 archive with the same results (on different systems).

Did you perform a PITR restore using that tar as a base backup? Do any errors 
occur?  The tar file alone will not be an intact backup, as I'm sure you're 
aware.

http://www.postgresql.org/docs/9.0/static/continuous-archiving.html#BACKUP-PITR-RECOVERY

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] upgrade

2011-02-03 Thread Alan Hodgson
On February 2, 2011, William Bruton d...@spidr.com wrote:
 How do I know which version to upgrade to from 8.1.4?
 

Well, 8.1 is no longer supported, it seems. So an upgrade to any supported 
version will likely require application changes, or at least thorough 
testing. You might as well go right to 9.0.3 to get all the latest features.

-- 
A hybrid Escalade is missing the point much in the same way that having a 
diet soda with your extra large pepperoni pizza is missing the point.


Re: [GENERAL] HA solution

2011-01-14 Thread Alan Hodgson
On January 14, 2011, Jaiswal Dhaval Sudhirkumar jaiswa...@microland.com 
wrote:
 Hi,
 
 I am looking for an active-active clustering solution.
 
 I have one SAN box and two separate nodes, where I need to create an
 active-active cluster. My data directory would be a single one, mounted
 from the SAN box on both nodes. (There will be one shareable data directory
 for both nodes.) So a query which comes to the load balancer
 (pgpool) will be routed to the node which has less load; however, it
 will use the same data directory. It is nothing but a RAC kind of
 structure. Now, my question is:
 
 1) Is the above implementation possible in PostgreSQL?
 2) Has someone implemented such a cluster in their production environment?
 
 Please share your thoughts/comments/experience on how I can achieve
 this.
 

You cannot run multiple PostgreSQL instances against the same data 
directory.

-- 
A hybrid Escalade is missing the point much in the same way that having a 
diet soda with your extra large pepperoni pizza is missing the point.


Re: [GENERAL] Backup and restore sequences

2011-01-07 Thread Alan Hodgson
On January 7, 2011, gvim gvi...@gmail.com wrote:
 PostgreSQL 9.0.1/pgAdminIII 1.12.1
 
 I want to copy selected tables from one database to another and maintain
 the sequences which I originally setup with:
 
 CREATE SEQUENCE venues_id_seq START WITH 1122;
 ALTER TABLE venues ALTER COLUMN id SET DEFAULT nextval('venues_id_seq');
 
 ... along with their current values, which have been augmented since the
 database was setup. When I backup via pgAdminIII the sequences are not
 even included. I also can't find anything in:
 
 man pg_dump
 
 ... which specifies sequences.
 
 gvim

--table=table
    Dump only tables (or views or sequences) matching table. Multiple
    tables can be selected by writing multiple -t switches. Also, the
    table parameter is interpreted as a pattern according to the same
    rules used by psql's \d commands (see Patterns [psql(1)]), so
    multiple tables can also be selected by writing wildcard characters
    in the pattern. When using wildcards, be careful to quote the
    pattern if needed to prevent the shell from expanding the wildcards.
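
(So, for the poster's case, something like this; the database name is
assumed:)

    pg_dump -t venues -t venues_id_seq mydb > venues.sql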




-- 
A hybrid Escalade is missing the point much in the same way that having a 
diet soda with your extra large pepperoni pizza is missing the point.


Re: [GENERAL] Restore problem

2010-12-28 Thread Alan Hodgson
On December 28, 2010, Adrian Klaver adrian.kla...@gmail.com wrote:
 On 12/28/2010 07:40 PM, Bob Pawley wrote:
  Open the file in Wordpad and see if it looks better.
  
  I downloaded an sql editor and it looks the same in it as well.
  
  At least the editor will make it easier to fix the problem. However I
  would like to know what happened so I can avoid it in the future.
  

It's often a good idea to maintain function definitions outside the database, 
under version control, and apply them to the database from there.

Also, try a unix2dos utility on the text of the functions before giving up 
and hand editing them.

-- 
A hybrid Escalade is missing the point much in the same way that having a 
diet soda with your extra large pepperoni pizza is missing the point.

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Restore problem

2010-12-28 Thread Alan Hodgson
On December 28, 2010, Bob Pawley rjpaw...@shaw.ca wrote:
 It's often a good idea to maintain function definitions outside the
 database,
 under version control, and apply them to the database from there.
 
 I would appreciate a more detailed explanation of this.

Treat them like source code.

 
 Bob
 
 Also, try a unix2dos utility on the text of the functions before giving
 up and hand editing them.
 
 I'll look at that - I'm also looking at something called Vim
 http://www.vim.org/download.php

vim is an excellent open source text editor. It may fix your problem if 
it's related to line endings.

-- 
A hybrid Escalade is missing the point much in the same way that having a 
diet soda with your extra large pepperoni pizza is missing the point.

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] 9.0 replication -- multiple hot_standby servers

2010-10-29 Thread Alan Hodgson
On October 29, 2010, Dean Gibson (DB Administrator) 
postgre...@ultimeth.com wrote:
 My question is, how do I configure the other three (still) hot_standby
 boxes to now use the new primary?  Clearly I can change the
 recovery.conf file on each standby box, but that seems like an
 unnecessary nuisance.

I'm curious about this too. It seems that currently I'd have to rebuild any 
additional slaves basically from scratch to use the new master.

-- 
A hybrid Escalade is missing the point much in the same way that having a 
diet soda with your extra large pepperoni pizza is missing the point.

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] 9.0 replication -- multiple hot_standby servers

2010-10-29 Thread Alan Hodgson
On October 29, 2010, Dean Gibson (DB Administrator) 
postgre...@ultimeth.com wrote:
 On 2010-10-29 11:17, Alan Hodgson wrote:
  I'm curious about this too. It seems that currently I'd have to
  rebuild any additional slaves basically from scratch to use the new
  master.
 
 I think so long as you pointed (via primary_conninfo) the additional
 slaves to the new (pending) master, before you touched the pending
 master's trigger file, you should be OK, as all the DBs should be in
 sync at that point.

Yeah, they're in sync data-wise, but do they think they're on the same WAL 
stream for continuity? Would be nice.


-- 
A hybrid Escalade is missing the point much in the same way that having a 
diet soda with your extra large pepperoni pizza is missing the point.

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Why Select Count(*) from table - took over 20 minutes?

2010-10-26 Thread Alan Hodgson
On October 26, 2010 10:18:41 am Ozz Nixon wrote:
 I have hung indexes off each column, to resolve my previous
 performance issue from 3+ weeks ago. However, COUNT() is still dog slow
 - this table is write once, read many... *never* updated, nor deleted.
 
 Any suggestions?

If you need to do count(*) on 60 million row tables, you will probably need 
faster hardware.
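
(Not part of the original reply, but when an approximate count will do, the
planner's statistics are instant; the table name is hypothetical and the
figure is only as fresh as the last ANALYZE:)

    SELECT reltuples::bigint AS approx_rows
    FROM pg_class WHERE relname = 'mytable';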

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Restore/dump from /usr/local/pgsql/data directory

2010-09-23 Thread Alan Hodgson
On September 23, 2010 01:49:50 pm kongs...@stud.ntnu.no wrote:
 Hi,
 I have a copy of /usr/local/pgsql/data from an old server. Is it
 possible to do a dump of the sql databases in this directory, so that
 I can easily migrate them to my current system?
 

You should be able to launch a postmaster against it to do so, yes. It will 
need to be the same (major) version as the old server was running.
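
(A sketch; the binary path and port are assumed, and the copied directory
must be owned by the postgres user:)

    /usr/local/pgsql/bin/pg_ctl -D /path/to/old/data -o '-p 5433' start
    pg_dumpall -p 5433 > old_cluster.sql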

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general

