Re: [GENERAL] Wrap around id failure and after effects

2013-11-26 Thread Richard Huxton

On 26/11/13 07:15, Arun P.L wrote:

Hi all,

We had a wraparound failure in the db and most of the tables and data
were missing. So we have done a full vacuum in db and after that the
tables reappeared, but now the problem is that all the tables have duplicates
when listing tables with \dt. And also after the vacuum we received the
following warning.

INFO:  free space map: 48 relations, 29977 pages stored; 134880 total
pages needed
DETAIL:  Allocated FSM size: 1000 relations + 2 pages = 215 kB
shared memory.
WARNING:  some databases have not been vacuumed in over 2 billion
transactions
DETAIL:  You may have already suffered transaction-wraparound data loss.

Is this an error that happened during the vacuum?  If so, what can be done
next to prevent data loss? The vacuum was not done as superuser; we are
doing a second vacuum as superuser now. And what are the further
steps to be followed now, like reindexing, etc.?



1. Did you take a full file-level backup of things before vacuuming?

2. What version?

3. How far back in the logs do the warnings go (you should have been 
receiving warnings for a long time)?


4. How/why had you disabled/altered the autovacuum daemon?

This shouldn't really be possible without disabling autovacuum or 
configuring it strangely.


http://www.postgresql.org/docs/current/static/routine-vacuuming.html#VACUUM-FOR-WRAPAROUND
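
For anyone checking their own installation, the age of each database's 
datfrozenxid shows how close it is to wraparound (a quick sketch; see the 
page above for how to interpret the numbers):

  SELECT datname, age(datfrozenxid) AS xid_age
  FROM pg_database
  ORDER BY age(datfrozenxid) DESC;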


--
  Richard Huxton
  Archonet Ltd


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Failed to autoconvert '1' to text.

2013-09-06 Thread Richard Huxton

On 06/09/13 09:13, Szymon Guz wrote:

Hi,
why isn't 'aa' always treated as string?



with x as (
   select
   '1' a,
   '2' b
)
SELECT levenshtein(a, b), length(a)
FROM x;

ERROR:  failed to find conversion function from unknown to text



Why should I cast '1' to '1'::TEXT to satisfy a function (TEXT, TEXT)?


I think it's to do with the CTE. Presumably its types get fixed 
separately from the SELECT levenshtein() call. A quoted literal is of type 
'unknown' until it has a context. It could be a date, point, hstore etc.


If you use the literals directly the context lets PostgreSQL figure it out.
   SELECT levenshtein('1','2');
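
So one option is to give the literals a type inside the CTE itself - the 
original query with explicit casts (an untested sketch):

  with x as (
     select
     '1'::text a,
     '2'::text b
  )
  SELECT levenshtein(a, b), length(a)
  FROM x;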

--
  Richard Huxton
  Archonet Ltd


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Why doesn't COPY support the HEADER options for tab-separated output?

2013-08-15 Thread Richard Huxton

On 12/08/13 23:18, Bruce Momjian wrote:

On Mon, Aug 12, 2013 at 03:17:00PM -0700, Jeff Janes wrote:

On Mon, Aug 12, 2013 at 2:21 PM, Bruce Momjian br...@momjian.us wrote:

On Mon, Aug 12, 2013 at 02:15:25PM -0700, Joe Van Dyk wrote:

Mostly just curious, as this is preventing me from using tab-separated output.
I'd like there to be a header in my files. I have to use CSVs instead.


Late to the discussion, but it does work to set format=csv and delimiter 
= E'\t' to get tab-separated output with a header. It would be nice not to 
have to, though.
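
In other words, something along these lines (table name and path are 
placeholders):

  COPY mytable TO '/tmp/mytable.tsv'
      WITH (FORMAT csv, HEADER true, DELIMITER E'\t');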


--
  Richard Huxton
  Archonet Ltd


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Postgres DB crashing

2013-06-20 Thread Richard Huxton

On 18/06/13 18:31, bhanu udaya wrote:

Hello,
Greetings.

My PostgreSQL (9.2) is crashing after certain load tests. Currently,
PostgreSQL crashes when 800 to 1000 threads are run simultaneously
on a 10 million record schema. Not sure if we have to tweak some more
parameters of Postgres. Currently, PostgreSQL is configured as
below on 7GB RAM on an Intel Xeon CPU E5507 2.27 GHz. Is this a Postgres
limitation to support only 800 threads, or is other configuration
required? Please look at the log below with errors. Please reply


max_connections                 5000
shared_buffers                  2024 MB
synchronous_commit              off
wal_buffers                     100 MB
wal_writer_delay                1000ms
checkpoint_segments             512
checkpoint_timeout              5 min
checkpoint_completion_target    0.5
checkpoint_warning              30s
work_mem                        1 GB
effective_cache_size            5 GB


Just to point out, your memory settings are set to allow *at least*

 shared_buffers 2GB + (5000 connections * 1GB work_mem) = 5TB+

You don't have that much memory. You probably don't have that much disk. 
This is never going to work.


As has been said, there's no way you can do useful work simultaneously 
with 1000 threads if you only have 4 cores - use a connection pooler. 
You'll also need to reduce work_mem to 1MB or so.
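
As a rough illustration only (values are made up, not a recommendation), a 
setup sized for that box would look more like:

  max_connections      = 200     # with pgbouncer/pgpool in front of the app
  shared_buffers       = 2GB
  work_mem             = 1MB     # allocated per sort/hash, per backend
  effective_cache_size = 5GB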


--
  Richard Huxton
  Archonet Ltd


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] PostgreSQL Synchronous Replication in production

2013-06-06 Thread Richard Huxton

On 06/06/13 11:20, Colin Sloss wrote:


I have been testing the differences between asynchronous and synchronous
hot standby streaming replication on PostgreSQL 9.2.4. There is some
push towards synchronous replication, but I am finding some serious
problems, and wonder how other people deal with them.

[snip]

The whole idea of my solution was to have no single point of failure.
This seems to create two exclusive points of failure, each needing a
completely separate reaction.


Synchronous replication provides a higher level of guarantee for an 
individual transaction (it's safely[1] on at least two boxes now) at the 
cost of making the system as a whole more brittle.


Your uptime as a service will inevitably be reduced since in the event 
of problems talking to the slave the master will *have* to delay/cancel 
new transactions.


I have seen people suggest some sort of mode where the server drops back 
to asynch mode in the event of problems. I can't quite understand the 
use-case for that though - either you want synchronous replication or 
you don't. Mostly-synchronous is just asynchronous.
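
For reference, the synchronous setup itself is only a couple of settings - a 
sketch for 9.1/9.2 (the standby name and host are made up):

  # master, postgresql.conf
  synchronous_standby_names = 'standby1'
  synchronous_commit = on

  # standby, recovery.conf - application_name must match the name above
  primary_conninfo = 'host=master.example.com port=5432 application_name=standby1'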


Here's a few questions. How you answer them will decide whether you 
really want synchronous replication or not:

1. The link between servers encounters network congestion
  a. The whole system should slow down.
 Committed transactions should ALWAYS be on
 two geographically separate machines.
  b. An alert should be sent.
 If it's not sorted in 5 mins we'll get someone to look at it.
2. Adding more servers[2] to my replication should:
  a. Make the system as a whole slower[3] and reduce uptime
 but increase the safety of committed transactions
  b. Make the system as a whole faster and increase uptime

There are cases where you want (a), but lots where you want (b) and 
monitor the replication lag.



[1] For various values of "safely", of course
[2] In the same mode - adding async slaves doesn't count
[3] Assuming a reasonable write load of course. Read-only databases 
won't care.


--
  Richard Huxton
  Archonet Ltd


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] PostgreSQL Synchronous Replication in production

2013-06-06 Thread Richard Huxton

On 06/06/13 12:48, Colin S wrote:

Thanks for your answer. I find it very interesting that you say that
synchronous setups should always be in two geographically separate
locations. In this case they are on the same subnet. Adding the lag of
committing to two, geographically separate, databases is not feasible
for this OLTP application.


Well, if they're in the same building(s) then your transactions are all 
at the same risk from fire/earthquake/godzilla etc. Might/might not be 
important to you.



I also like your point that mostly synchronous is just asynchronous.
So, responding by switching to asynchronous as a response to slow-down
is asynchronous anyway.


Mostly synchronous is like being a bit pregnant.


Any other comments, or examples, of when synchronous is worth
implementing would be greatly appreciated.


Note that PostgreSQL's synchronous replication just guarantees that the 
commit has reached the transaction log on the slave. That doesn't mean 
the slave has replayed the transaction log and a query against the slave 
will show the transaction's results. So it doesn't in itself guarantee 
that you can issue read-only queries against either server 
indiscriminately and see the same results.
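
You can watch the state of each standby (including which one is currently 
the synchronous one) from the master - a sketch for 9.2:

  SELECT application_name, state, sync_state, replay_location
  FROM pg_stat_replication;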


However, if you really, really need to know that a committed transaction 
is on two physically separate sets of disks then synchronous is what you 
want. If both sets of disks are in the same building then you might be 
able to achieve the same result by other (cheaper/simpler?) means.


If you have a business e.g. selling books or train tickets or some such 
then you might decide it's better to have a simpler more robust setup 
from the point of view of providing continuous service to end-customers. 
In the (hopefully rare) event of a crash irreparably losing some 
transactions apologise to your customers and recompense them generously.


For a system handling multi-million pound inter-bank transfers you might 
decide it's better to have the system not working at all rather than 
have an increased risk of a lost transaction.


Of course in both cases you might well want a separate list/cache of 
pending/recently-committed transactions to check against in the event of 
a failure.


I believe what you should do from an engineering approach is to treat it 
in a similar way to security. What do you want to protect against? Make 
a list of possible failures and what they mean to the business/project 
and then decide how much time/money to spend protecting against each one.


--
  Richard Huxton
  Archonet Ltd


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Checking for changes in other tables

2013-04-26 Thread Richard Huxton

On 26/04/13 10:01, CR Lender wrote:

I can add a trigger on eu_loans to check if Diane and Betty both live in
the EU. The problem is how to prevent one of them from moving to a
non-EU country (if they do, the loan has to be cancelled first). They
are however allowed to move to other EU countries.

At the moment, this is checked by the application, but not enforced by
the database. I could add more triggers to the persons table (and
another one on countries), but that doesn't feel right... countries
and persons are base data and shouldn't need to know about other
tables using their records.


I think this is more a problem of terminology rather than your current 
triggers. Triggers aren't really part of a table, but they are 
observing it, so it's a sensible place to list them when viewing a 
table-definition in psql. There's no reason the trigger function is even 
in the same schema as the targeted table.


How would it feel if the syntax was more like the following?

CREATE TRIGGER ... OBSERVING UPDATES ON persons ...

or even

PUBLISH UPDATE,INSERT,DELETE ON persons AS person_changes;
SUBSCRIBE TO person_changes CALLING PROCEDURE ...;

A different feel, but no difference in behaviour.
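
For what it's worth, the observing trigger being discussed only needs a few 
lines - a sketch with made-up column names, since I don't know your schema:

  CREATE OR REPLACE FUNCTION persons_check_eu_loans() RETURNS trigger AS $$
  BEGIN
      -- lender_id / borrower_id / country / is_eu are invented for this example
      IF EXISTS (SELECT 1 FROM eu_loans l
                 WHERE l.lender_id = NEW.id OR l.borrower_id = NEW.id)
         AND NOT EXISTS (SELECT 1 FROM countries c
                         WHERE c.code = NEW.country AND c.is_eu)
      THEN
          RAISE EXCEPTION 'person % has an open EU loan and must remain in the EU', NEW.id;
      END IF;
      RETURN NEW;
  END;
  $$ LANGUAGE plpgsql;

  CREATE TRIGGER persons_eu_loans_check
      BEFORE UPDATE OF country ON persons
      FOR EACH ROW EXECUTE PROCEDURE persons_check_eu_loans();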

--
  Richard Huxton
  Archonet Ltd


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Table containing only valid table names

2013-04-26 Thread Richard Huxton

On 26/04/13 16:09, Michael Graham wrote:

I'm pretty sure I can't do what I need as postgres doesn't support
triggers on DDL but maybe I'm wrong.


If you're still in development and not live, it'll be worth checking out 9.3

http://www.postgresql.org/docs/devel/static/event-triggers.html
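
A minimal 9.3 sketch - the function just reports the command tag of whatever 
DDL fired it:

  CREATE OR REPLACE FUNCTION report_ddl() RETURNS event_trigger AS $$
  BEGIN
      RAISE NOTICE 'DDL command executed: %', tg_tag;
  END;
  $$ LANGUAGE plpgsql;

  CREATE EVENT TRIGGER report_ddl_trigger
      ON ddl_command_end
      EXECUTE PROCEDURE report_ddl();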


--
  Richard Huxton
  Archonet Ltd


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] apt.postgresql.org broken dependency?

2013-04-26 Thread Richard Huxton

On 25/04/13 18:01, Martín Marqués wrote:

Just tried upgrading and added the apt-postgresql.org repo to my
Debian server (on testing now) and I got some backages like barman
retained because some dependencies couldn't be satisfied.

The following packages have unmet dependencies:
  barman : Depends: python (< 2.7) but 2.7.3-4 is to be installed
           Depends: python-argcomplete but it is not going to be installed



Since when is 2.7.3 not larger than 2.7?


Is that not complaining that it *wants* a version of python < 2.7 and 
you have larger?


--
  Richard Huxton
  Archonet Ltd


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] PostgreSQL service terminated by query

2013-03-26 Thread Richard Huxton

On 26/03/13 05:55, adrian.kitching...@dse.vic.gov.au wrote:

I'm hoping I can get some info on a query which terminates my PostgreSQL
service.
The query is a relatively simple PostGIS query:



The log text when the service crashes is:



2013-03-26 15:49:55 EST LOG: server process (PID 3536) was terminated by
exception 0xC005
2013-03-26 15:49:55 EST HINT: See C include file ntstatus.h for a
description of the hexadecimal value.



I'm running PostgreSQL 9.1 with PostGIS 2.0 installed on an WinXP SP3:
4GB RAM machine. Shared_buffers set at 50MB. Let me know if further info
needed.


This is a Windows memory-related error. It might be due to a library 
problem, bad RAM or a corrupted pointer in the database table itself.


1. Can you reliably produce the error with this specific gid?
2. Can you dump the database (or perhaps just the tables in question)?

If we can't find any problems in the database itself and you can spare 
the downtime, it may be worth running a RAM checker overnight.



Notice:
This email and any attachments may contain information that is personal,
confidential,
legally privileged and/or copyright. No part of it should be reproduced,
adapted or communicated without the prior written consent of the
copyright owner.


Oh no, too late!

--
  Richard Huxton
  Archonet Ltd


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Do after update trigger block the current transaction?

2013-03-26 Thread Richard Huxton

On 26/03/13 13:24, Clemens Eisserer wrote:

Hi Richard,


  Will triggers (after
update specifically) cause the execution of SQL-commands to pause
until the trigger-function has returned (at statement execution time
or commit)?


The trigger will block. If it didn't then it couldn't abort the transaction
if it needed to.


Thanks for the clarification.


Why not use one of the established trigger-based replication solutions?


Because the other database which I would like to keep in sync is a
MySQL db. Furthermore I do not need a 1:1 replica, but instead just
update a few columns in different tables there.

My initial plan was to add a timestamp column which is updated at every
update and to poll for changes every 5-10s. However, the word
"polling" seems to cause an allergic reaction in some people ;)


Might be worth looking at PgQ - a queueing system underlying Londiste. 
That would handle tracking the changes in PostgreSQL leaving you to just 
handle the MySQL end. Timestamps will do the job as long as you are 
careful to allow enough slack to deal with clock updates.
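
If you do stay with timestamps, the usual shape is a trigger-maintained 
column plus a poller that reads with some slack - a sketch with made-up names:

  ALTER TABLE some_table
      ADD COLUMN last_updated timestamptz NOT NULL DEFAULT now();

  CREATE OR REPLACE FUNCTION touch_last_updated() RETURNS trigger AS $$
  BEGIN
      NEW.last_updated := now();
      RETURN NEW;
  END;
  $$ LANGUAGE plpgsql;

  CREATE TRIGGER some_table_touch
      BEFORE UPDATE ON some_table
      FOR EACH ROW EXECUTE PROCEDURE touch_last_updated();

  -- the poller then asks for rows changed since its last run, minus some slack:
  --   SELECT * FROM some_table WHERE last_updated > <last poll time> - interval '1 minute';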



--
  Richard Huxton
  Archonet Ltd


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Running update in chunks?

2013-01-25 Thread Richard Huxton

On 25/01/13 08:57, Tim Uckun wrote:

What if you do:
alter table cars.imports set (fillfactor=50);
Before the vacuum full, and then try the update again?


This makes a dramatic difference when combined with a vacuum.

UPDATE 98834
Time: 3408.210 ms

Ten times faster!
That suggests (to me, at least) that it is related to index updating. 
Again, your GIN index seems the primary candidate.


A fillfactor of 50% means row updates probably stay on the same 
disk-block as their previous version. This implies fewer index updates.


Try running iostat (I think that's available on a Mac) with/without the 
fillfactor and with/without the GIN index while you do the updates. It's 
possible your SSD is just behaving oddly under stress.


--
  Richard Huxton
  Archonet Ltd


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Running update in chunks?

2013-01-25 Thread Richard Huxton

On 25/01/13 11:38, Tim Uckun wrote:

That suggests (to me, at least) that it is related to index updating. Again,
your GIN index seems primary candidate.

Try running iostat (I think that's available on a Mac) with/without the
fillfactor and with/without the GIN index while you do the updates. It's
possible your SSD is just behaving oddly under stress.



I dropped the index and the numbers shot up tenfold or more.  I don't
know why postgres feels the need to update the GIN index on the hstore
field when I am only updating an integer field but it looks like I
need to split the hstore into a different table.

If the row moves to a different block, then it has no choice. The old 
index entry will point to an invalid block. There are some optimisations 
(HOT - http://pgsql.tapoueh.org/site/html/misc/hot.html) but that relies 
on (iirc) the update staying on the same block and also not updating any 
indexed fields (and you were, I think).


A GIN index is very expensive to update compared to btree too.
--
  Richard Huxton
  Archonet Ltd


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Running update in chunks?

2013-01-21 Thread Richard Huxton

On 21/01/13 08:04, Tim Uckun wrote:

This is the query I am running

update cars.imports i
 set make_id = md.make_id
 from cars.models md where i.model_id = md.id;


Here is the analyse

Looks like it's the actual update that's taking all the time.

This query takes fifty seconds on a macbook air with i7 processor and
eight gigs of RAM and SSD hard drive.  I am using postgres 9.2
installed with homebrew using the standard conf file.

Can you try a couple of things just to check timings? Probably worth 
EXPLAIN ANALYSE.


SELECT count(*) FROM cars.imports i JOIN cars.models md ON i.model_id = 
md.id;


CREATE TEMP TABLE tt AS SELECT i.* FROM cars.imports i JOIN cars.models 
md ON i.model_id = md.id;


Now the first one should take half a second judging by your previous 
explain. If the second one takes 50 seconds too then that's just the 
limit of your SSD's write speed. If it's much faster then something else is 
happening.



--
  Richard Huxton
  Archonet Ltd


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Running update in chunks?

2013-01-21 Thread Richard Huxton

On 21/01/13 10:30, Tim Uckun wrote:

Can you try a couple of things just to check timings. Probably worth EXPLAIN
ANALYSE.

SELECT count(*) FROM cars.imports i JOIN cars.models md ON i.model_id =
md.id;


Takes about 300 ms


CREATE TEMP TABLE tt AS SELECT i.* FROM cars.imports i JOIN cars.models md
ON i.model_id = md.id;

Takes about 300 ms

OK - so writing all the data takes well under one second but updating 
the same amount takes 50 seconds.


The only differences I can think of are WAL logging (transaction log) 
and index updates (the temp table has no indexes).


1. Try CREATE TABLE rather than CREATE TEMP TABLE - if that's still 
quick then it's not the time taken to write WAL.
2. Run the update query against your new tt table and see how long that 
takes.
3. Add indexes and repeat (in particular I'd be suspicious of the gin 
index on data)


My guess is that it's the time taken to update the data index - gin 
indexes can be slow to rebuild (although 50 seconds seems *very* slow). 
If so there are a few options:
1. Split the table and put whatever this data is into an import_data 
table - assuming it doesn't change often.
2. Try a fill-factor of 50% or less - keeping the updates on the same 
data page as the original might help
3. Drop the gin index before doing your bulk update and rebuild it at 
the end. This is a common approach with bulk-loading / updates.


Oh - I'm assuming you're only updating those rows whose id has changed - 
that seemed to be the suggestion in your first message. If not, simply 
adding AND make_id  md.make_id should help. Also (and you may well 
have considered this) - for a normalised setup you'd just have the 
model-id in imports and look up the make-id through the models table.
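
That is, something like this - IS DISTINCT FROM also copes with rows where 
make_id is still NULL, which a plain <> would skip:

  UPDATE cars.imports i
     SET make_id = md.make_id
    FROM cars.models md
   WHERE i.model_id = md.id
     AND i.make_id IS DISTINCT FROM md.make_id;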


--
  Richard Huxton
  Archonet Ltd


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Running update in chunks?

2013-01-21 Thread Richard Huxton

On 21/01/13 20:09, Tim Uckun wrote:

Just to close this up and give some guidance to future googlers...

Careful, future googlers.

Conclusion.  Updates on postgres are slow

Nope.


(given the default
postgresql.conf).  I presume this is due to MVCC or the WAL or
something and there are probably some things I can do to tweak the
conf file to make them go faster but out of the box running an update
on a table with lots of rows is going to cost you a lot.

Unlikely. Do you really think that a PostgreSQL installation typically 
runs 100 times slower on updates than inserts and every other user has 
just said "oh, that's ok then"? Or is it more likely that something 
peculiar is broken in your setup?



  Removing the indexes doesn't help that much.

Suggestion for the PG team.  Deliver a more realistic postgres.conf by
default. The default one seems to be aimed at ten year old PCs with
very little RAM and disk space. At least deliver additional conf files
for small, medium, large, huge setups.

--
  Richard Huxton


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Backup/Restore bytea data

2013-01-14 Thread Richard Huxton

On 14/01/13 01:13, sub3 wrote:

Hi,
I am having an issue upgrading a really old 8.2 db up to 9.2.2. One of the
tables contains a bytea field.  When I backup & restore using pgadmin from
my 9.2.2 install, it doesn't convert this field correctly.


Could this be due to your bytea_output setting?
  http://www.postgresql.org/docs/9.2/static/datatype-binary.html

Not sure how this could snag you if you are dumping using 9.2, but this:


I see it starts w/special character when selecting it from the old database;
in the new db, I see a string starting w/\211PNG.


is clearly in escape rather than hex format.
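
If whatever reads the restored data can't cope with hex, you can ask for the 
old format explicitly (a sketch):

  SET bytea_output = 'escape';   -- 'hex' is the default from 9.0 onwards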

--
  Richard Huxton
  Archonet Ltd


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Postgres PHP error

2012-12-03 Thread Richard Huxton

On 03/12/12 05:18, rahul143 wrote:

Hi All

I'm receiving the following error, on a PHP page, connecting to postgres 7.4,
installed on Mandrake 10.0



Others have answered your question. However...

Please make sure you have regular scheduled backups for that database. 
That is quite an old (9 years) version of PostgreSQL and you'll be 
unlikely to find many people with a similar version who can help you 
with problems in the event of a crash.


It's probably a good idea to see if you can install the latest version 
from source on that machine and use its version of pg_dump to dump the 
database regularly too. I'd expect to have to do a little work to move 
the data into an up-to-date version of PostgreSQL and it's always better 
to know what issues you'll have before doing it for real.


--
  Richard Huxton
  Archonet Ltd


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Noticed something odd with pgbench

2012-11-16 Thread Richard Huxton

On 16/11/12 19:35, Shaun Thomas wrote:

Hey guys,

So, we have a pretty beefy system that runs dual X5675's with 72GB of 
RAM. After our recent upgrade to 9.1, things have been... odd. I 
managed to track it down to one setting:


shared_buffers = 8GB



It does the same thing at 6GB. 4GB is safe for hours on end, but 6GB 
and 8GB implode within in minutes. During this, kswapd goes crazy 
paging out the cache, at the same time it's reading from disk to put 
them back in. It's like I fed the kernel poison or something.


Has anybody else noticed something like this? I got this behavior with 
9.1.6 on a 3.2 kernel. No amount of tweaks in /proc/sys/vm changed 
anything either, so I'm not convinced it's a NUMA problem.



Does this match what you're seeing?

http://frosty-postgres.blogspot.co.uk/2012/08/postgresql-numa-and-zone-reclaim-mode.html
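
If it is the same problem, the fix discussed there is a kernel setting rather 
than a PostgreSQL one - roughly this (check it matches your symptoms before 
applying):

  # /etc/sysctl.conf
  vm.zone_reclaim_mode = 0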


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Recover from failed files

2012-11-05 Thread Richard Huxton

On 04/11/12 23:28, Nyamul Hassan wrote:
Upon inspecting the hard drive, we were able to salvage the data 
folder, but when we try to load it into the PGSQL version of the data 
folder (8.2), the server would not load.  Upon closer inspection, we 
suspect the culprit to be the file 0015 in pg_clog which is 214kb 
instead of 256kb size of all other files in the same folder.


Is there any way that we can salvage the data from this data folder?


1. Make sure you have a complete backup of all of the data dir (pg_xlog, 
clog - everything) before making any changes.
This one is vital. Don't skip it. Any fixes you attempt may end up 
making things worse.


2. What is the exact error-message you get when trying to start up 
PostgreSQL?
When starting the server, try doing it directly in single-user mode 
until everything is up and running

http://www.postgresql.org/docs/8.2/static/app-postgres.html

3. Have a quick read of the blog-post below as a start point explaining 
the various files.
It's a useful introduction and can give you some keywords to search 
against.

http://it.toolbox.com/blogs/database-soup/pg_log-pg_xlog-and-pg_clog-45611


4. You may end up needing the resetxlog tool, but don't just blindly run it:
http://www.postgresql.org/docs/8.2/static/app-pgresetxlog.html

5. Version 8.2 is end-of-life since 2011.
Once your database is working again, dump the data and upgrade as 
soon as possible.
Until you upgrade, make sure you have the last release of 8.2.23 
running.


6. Obviously, make sure backups are scheduled regularly.

It may well be that you can get things up and running by just padding 
pg_clog/0015 to 256KB with zeroes. That doesn't mean your database will 
be 100% consistent though - transaction information will have been lost.


But before you do anything drastic, do steps #1 and #2.

--
  Richard Huxton


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] deadlock detected

2012-11-05 Thread Richard Huxton

On 05/11/12 18:39, AI Rumman wrote:

Hi all,

I am using Postrgesql 9.1

I got a message in my log:
ERROR:  deadlock detected
DETAIL:  Process 20265 waits for ShareLock on transaction 27774015; 
blocked by process 20262.
Process 20262 waits for ShareLock on transaction 27774018; 
blocked by process 20265.
Process 20265: UPDATE t1 SET product_id = 1017966 WHERE 
ticketid = '2170501'

Process 20262: UPDATE c1 SET deleted=1 WHERE id='2170501'
HINT:  See server log for query details.
STATEMENT:  UPDATE t1 SET product_id = 1017966 WHERE ticketid = '2170501'

How may I get more information about this deadlock like which queries 
created it.


The error message shows which queries - your two UPDATEs. I'm guessing 
either t1 or c1 is a view and so both refer to the same row with id 2170501.
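
Beyond that, turning on lock-wait logging will get the full queries and lock 
details into the server log the next time it happens - a sketch:

  # postgresql.conf (both are standard settings; values illustrative)
  log_lock_waits = on        # log whenever a lock wait exceeds deadlock_timeout
  deadlock_timeout = 1s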


--
  Richard Huxton



--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] table logging

2012-10-29 Thread Richard Huxton

On 28/10/12 19:25, Jeff Janes wrote:

I am looking for some very simple table logging.  I am not trying to
do auditing in a hostile environment, just simple logging.

I found two candidates, tablelog from pgfoundry, and
http://wiki.postgresql.org/wiki/Audit_trigger_91plus

The first has the advantage of being simple, but hasn't been
maintained in 5 years which is a little worrisome.


I've got tablelog in use on one of my projects. I had to make one small 
fix when I upgraded the db to 9.1 - something to do with quote escaping. 
Can't remember the details I'm afraid.


Other than that, it seems to work fine.


--
  Richard Huxton
  Archonet Ltd


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Notiffy problem

2012-06-29 Thread Richard Huxton

On 29/06/12 09:01, adasko98 wrote:

Hi
First of all, sorry for my English :) I have a problem with notify/listen.
I wrote a function which returns a trigger. Everything is OK, but when I want
to send a variable as the second parameter, NOTIFY says: syntax error



 Notify demoApp, 'some text';



n_user :='sda';
 Notify demoApp, n_user;   -- here is the problem


Looks like a limitation of the plpgsql parser, perhaps even counts as a 
bug. You can work around it with EXECUTE though, something like:

  cmd := 'NOTIFY demoApp, ' || quote_literal(n_user);
  EXECUTE cmd;
or just
  EXECUTE 'NOTIFY demoApp, ' || quote_literal(n_user);
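
If you're on 9.0 or later there's also the pg_notify() function, which takes 
the payload as an ordinary argument and avoids the quoting dance entirely:

  PERFORM pg_notify('demoapp', n_user);  -- note: the channel string is not case-folded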

--
  Richard Huxton
  Archonet Ltd

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] maybe incorrect regexp_replace behavior in v8.3.4 ?

2012-05-16 Thread Richard Huxton

On 16/05/12 14:54, Gauthier, Dave wrote:

bi_hsx_a0_latest=# select regexp_replace('xxx','^xxxy$',null);
regexp_replace


(1 row)
But why did it return null in this case?  I would think no match would leave it 
'xxx'.


If a function is defined as strict then any null parameters 
automatically result in a null result.


And indeed, this:
  SELECT * FROM pg_proc WHERE proname LIKE 'regexp_r%';
shows proisstrict is set to true, as it is for most other functions.
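
A quick illustration, plus one way to get the original string back when the 
replacement might be null (a sketch):

  SELECT regexp_replace('xxx', '^xxxy$', NULL);                -- NULL (strict function)
  SELECT regexp_replace('xxx', '^xxxy$', coalesce(NULL, ''));  -- 'xxx' (no match, so unchanged)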

--
  Richard Huxton
  Archonet Ltd

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Problem with reading data from standby server ?

2012-04-20 Thread Richard Huxton

On 20/04/12 09:39, Condor wrote:

Hello,

when I read binary replication tutorial
(http://wiki.postgresql.org/wiki/Binary_Replication_Tutorial) I see on
Hot Standby: Hot Standby is identical to Warm Standby, except that the
Standby is available to run read-only queries.
I setup hot standby server described in tutorial and it's working fine,
no problem with that.



 I have a problem when I try to start a script that

should read whole table, error message from php is:



PHP Warning: pg_query(): Query failed: ERROR: canceling statement due to
conflict with recovery



When data is fetched it's saved into a file after some modifications.
This script is work a 30-40 min until all data is parsed. Well, I think
problem is started when master server send new wal file to slave, but
how I can resolve that problem ?


Your master database is being updated all the time and your slave is 
supposed to be a perfect copy, including deleted/updated rows being no 
longer visible. So - when you run a query it might need to do one of two 
things:

  1. Pause replication
  2. Cancel the query

At some point PostgreSQL switches from doing #1 to doing #2 (otherwise 
you could get so far behind that the replica could never catch up). You can 
control how long before it switches:


http://www.postgresql.org/docs/9.1/static/hot-standby.html#HOT-STANDBY-CONFLICT
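
The knobs are on the standby - roughly (values illustrative):

  # standby, postgresql.conf
  max_standby_streaming_delay = 300s   # how long replay will wait for your queries
  max_standby_archive_delay   = 300s
  hot_standby_feedback        = on     # 9.1+: ask the master not to create the conflict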


--
  Richard Huxton
  Archonet Ltd

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] default value returned from sql stmt

2012-03-30 Thread Richard Huxton

On 29/03/12 23:28, Pavel Stehule wrote:

select anum from t1 where anum = 4
union all select 100 limit 1;


I'm not sure the ordering here is guaranteed by the standard though, is 
it? You could end up with the 4 being discarded.


--
  Richard Huxton
  Archonet Ltd

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] default value returned from sql stmt

2012-03-30 Thread Richard Huxton

On 30/03/12 08:46, Pavel Stehule wrote:

2012/3/30 Richard Huxton d...@archonet.com:

On 29/03/12 23:28, Pavel Stehule wrote:


select anum from t1 where anum = 4
union all select 100 limit 1;



I'm not sure the ordering here is guaranteed by the standard though, is it?
You could end up with the 4 being discarded.


The order is random only for UNION; UNION ALL should respect
order.  But I didn't check it in the standard.


Let's put it this way - a quick bit of googling can't find anything that 
says the order *is* guaranteed, and (almost?) no other operations do so 
by default.


--
  Richard Huxton
  Archonet Ltd

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Desperately need a magical PG monitoring tool

2012-03-26 Thread Richard Huxton

On 26/03/12 19:58, Andreas wrote:

Hi,

is there a tool for monitoring PG servers?



How do you watch that all runs well ?


There are a number of tools. You might want to google around:
- nagios
- monit
- munin
There are plenty of others

Nagios is aimed at multi-server service monitoring (and alerting). So 
you can keep track of 20 websites on 5 different servers etc.


Monit is more focused on monitoring/alerting/restarting on a single server.

Munin is about performance tracking and graphing. You can set it up to 
alert if parameters get outside a set range.



For your scenario, I'd consider restoring the backup to another database 
(on another server perhaps) and checking some suitable value (e.g. a max 
timestamp in a frequently updated table). You could do all this from a 
simple cron-job + perl script but you might want to consider one of the 
tools mentioned above.


--
  Richard Huxton
  Archonet Ltd

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Indexing MS/Open Office and PDF documents

2012-03-15 Thread Richard Huxton

On 15/03/12 21:12, Jeff Davis wrote:

On Fri, 2012-03-16 at 01:57 +0530, alexander.bager...@cognizant.com



We have
hard time identifying MS/Open Office and PDF parsers to index stored
documents and make them available for text searching.



The first step is to find a library that can parse such documents, or
convert them to a format that can be parsed.


I've used docx2txt and pdf2txt and friends to produce text files that I 
then index during the import process. An external script runs the whole 
process. All I cared about was extracting raw text though, this does 
nothing to identify headings etc.


--
  Richard Huxton
  Archonet Ltd

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] 100% cpu usage on some postmaster processes kill the complete database

2012-03-01 Thread Richard Huxton

On 01/03/12 16:41, Paul Dunkler wrote:

Hi List,

we are currently running a rather large postgresql-installation with 
approximately 4k Transactions and 50k index scans per second.

In the last days on some times of the day (irregular - 3-4 times a day), some 
of the postmaster processes are running with 100% cpu usage. That leads to a 
total breakdown of query execution. We see tons of statements which are 
correctly automatically aborted by our statement_timeout set to 15 seconds. I 
tried to search, but do not really recognize what the problem could be there...

Some things i have checked:
- We are not running any bulk jobs or maintenance scripts at this time
- No system errors in any logs during that slowdowns
- I/O Performance seems fine. No high IO Wait amount... But IO Write totally 
drops in that times because it seems that no postgres process can perform any 
update

I just installed a script, which prints me out the top and ps axf information 
for facing out the problem. I will post a snippet of the top here:


Combine that with this:
  SELECT * FROM pg_stat_activity;

That will let you line up pids from top with active queries.

--
  Richard Huxton
  Archonet Ltd

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] 100% cpu usage on some postmaster processes kill the complete database

2012-03-01 Thread Richard Huxton

On 01/03/12 19:41, Paul Dunkler wrote:

I did that now - and analyzed the situation a bit. There are only queries
running which will process very fast under high load (only index scans, very low
rates of sequential scans). I found a remarkable number of Insert statements...

And sometimes when that happens, the CPU utilization is going up to nearly 100%
too, and 98% of that is system usage...


You're running on a box larger than I'm used to, so this is only 
speculation. I'm wondering whether you're hitting problems with lock 
contention or some such. It looks like you've got 48 cores there all at 
about 100% possibly none of them getting much chance to do any work.


Oddly, the totals you posted in your top output show 6.3% user cpu 
usage, which I can't make match with 50-odd processes all approaching 
100% cpu.


Perhaps have a look at vmstat output too - see if context-switches spike 
unusually high during these periods (sorry - no idea what an unusually 
high number would be on a machine like yours).


Reducing the number of concurrent backends might help, but that rather 
depends on whether my guess is right.


If no-one more experienced than me comes along shortly, try reposting to 
the performance list. There are people there who are used to machines of 
this size.


--
  Richard Huxton
  Archonet Ltd

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Problemas com client_encoding ?

2012-02-24 Thread Richard Huxton

On 24/02/12 02:34, Emanuel Araújo wrote:

[user@local ~]$ psql
psql: invalid connection option client_encoding



1o. Server
SO - Centos 5.7 Final
PostgreSQL 9.1.1


Apologies - my Spanish is non-existent (that's assuming your email 
wasn't in Portuguese or some such).


http://archives.postgresql.org/pgsql-admin/2011-09/msg00088.php
http://archives.postgresql.org/pgsql-admin/2011-09/msg00101.php

Do you have version 9.0 installed too?

--

  Richard Huxton
  Archonet Ltd



Re: [GENERAL] problem trying to create a temp table

2012-02-24 Thread Richard Huxton

On 24/02/12 13:26, mgo...@isstrucksoftware.net wrote:


ALL,
Using 9.1.2 on Windows 7 X64 for development.
I'm trying to create a temporary table used to store session variables
 CREATE TEMP TABLE iss.sessionsettings



When I try and run this I get the following error message.
ERROR: cannot create temporary relation in non-temporary schema

Temp tables get their own schema, and each session (connection) gets
its own temp schema. So - don't qualify them by schema.
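
So for the OP's case, something like this (columns are made up):

  CREATE TEMP TABLE sessionsettings (name text, value text);
  SELECT * FROM sessionsettings;           -- found via the session's temp schema
  SELECT * FROM pg_temp.sessionsettings;   -- pg_temp is an explicit alias for it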

--
  Richard Huxton
  Archonet Ltd


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] problem trying to create a temp table

2012-02-24 Thread Richard Huxton

On 24/02/12 13:36, mgo...@isstrucksoftware.net wrote:

How do I access it? I just did that, and when I try to access it I get:
ERROR: relation "sessionsetting" does not exist
LINE 1: select * from sessionsetting


= CREATE SCHEMA foo;
CREATE SCHEMA
= CREATE TABLE foo.table1 (id int);
CREATE TABLE
= SET search_path = foo;
SET
= INSERT INTO table1 VALUES (1),(2),(3);
INSERT 0 3
= CREATE TEMP TABLE table1 (id int);
CREATE TABLE
= INSERT INTO table1 VALUES (4),(5),(6);
INSERT 0 3
= SELECT * FROM table1;
 id
----
  4
  5
  6
(3 rows)
= DROP TABLE table1;
DROP TABLE
= SELECT * FROM table1;
 id
----
  1
  2
  3
(3 rows)

Try SELECT * FROM pg_namespace to see the various temp schemas being 
created.


--
  Richard Huxton
  Archonet Ltd

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] problem trying to create a temp table

2012-02-24 Thread Richard Huxton

On 24/02/12 13:37, Andrew Gould wrote:

On Fri, Feb 24, 2012 at 7:32 AM, Richard Huxton d...@archonet.com wrote:


Temp tables get their own schema, and each session (connection) gets
its own temp schema. So - don't qualify them by schema.


Is that to avoid naming conflicts between simultaneous users?


Yes. I believe it also invisibly adds it to your search_path, the 
same as it does with the pg_catalog schema.


--
  Richard Huxton
  Archonet Ltd

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Postgresql as main database

2012-02-23 Thread Richard Huxton

Hello all,
 Good day,
I would like to make PostgreSQL 8.4 the main database for running 
three pieces of software and possibly exchanging data. Is it possible? If yes, 
what is the implication in terms of performance?


It's certainly possible. Obviously it will need a bigger machine than 
you would use for just one of the databases.


Bear in mind that you can't have cross-database queries without using 
something like the dblink package. I don't know if that affects your 
exchanging data.
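
A sketch of what a cross-database query looks like with the dblink module 
(connection string, table and columns are made up):

  SELECT *
  FROM dblink('dbname=other_db', 'SELECT id, name FROM customers')
       AS t(id int, name text);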


Oh - and unless you really have no choice in the matter, use 9.1 rather 
than 8.4 - you will get better performance, new features and it will be 
supported for longer.


--
  Richard Huxton
  Archonet Ltd



Re: [GENERAL] function return update count

2012-01-06 Thread Richard Huxton

On 06/01/12 16:33, David Johnston wrote:

In 9.1 you could use an updatable CTE and in the main query perform
and return a count.  I would think plpgsql would be the better option
though.


For the SQL option, it would be this (9.1 only though - I think David's 
right there).


CREATE FUNCTION f1() RETURNS int AS $$
WITH rows AS (
UPDATE t1 SET ... WHERE ... RETURNING 1
)
SELECT count(*)::int FROM rows
$$ LANGUAGE SQL;


Unfortunately you can't do UPDATE ... RETURNING count(*) directly so you 
need to go through this business with the CTE (WITH clause).


Oh - the cast to int is because count() returns bigint.
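
For completeness, the plpgsql version David mentions just reads ROW_COUNT 
after the update - a sketch with made-up column names:

  CREATE FUNCTION f1_plpgsql() RETURNS int AS $$
  DECLARE
      n int;
  BEGIN
      UPDATE t1 SET val = val + 1 WHERE id > 0;
      GET DIAGNOSTICS n = ROW_COUNT;
      RETURN n;
  END;
  $$ LANGUAGE plpgsql;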

--
  Richard Huxton
  Archonet Ltd

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] sql statements using access frontend

2011-12-02 Thread Richard Huxton

On 02/12/11 14:02, Gavin Mitchell wrote:

But if the data is changed within the table itself or a form based on
the table or query it starts a transaction and adds all available fields
into the where clause

2011-12-01 10:03:52 GMT LOG: statement: BEGIN;UPDATE
ua.pole_survey_tbl SET completedby=E'test' WHERE pole_survey_id
= 6478 AND survey_id = 1025 and (etc…….)

I chopped the log entry short but there are AND entries fo all fields in
the table


It's an implementation of optimistic locking. Imagine we are both 
updating the same row at the same time. If my update gets committed 
before yours, then yours won't find a row to update. Otherwise my 
changes could be overwritten without you knowing.


--
  Richard Huxton
  Archonet Ltd

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Huge number of INSERTs

2011-11-18 Thread Richard Huxton

On 18/11/11 12:30, Phoenix Kiula wrote:

I've currently disabled any INSERT functions on my website...but even
with disabled INSERTs and only SELECTs alive, I still see the psql:
FATAL:  sorry, too many clients already message.


As Tomas has said, this is nothing to do with inserts and everything to 
do with too many clients. Take the time to read through his reply. Fix 
the number of clients before worrying about other details.



Btw, I don't see any PG logs. What could be the problem? The config
says that it should store it in the directory pg_log, but this
directory is empty.


You'll need to check the manuals for full details on how to configure 
your logging - I'd expect a zero-length file even if you weren't logging 
anything to it. Might be worth checking the directory is owned by user 
postgres (or whoever your server runs as).


--
  Richard Huxton
  Archonet Ltd

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] function within a function/rollbacks/exception handling

2011-11-07 Thread Richard Huxton

On 07/11/11 19:18, Lori Corbani wrote:


I have a function, call it 'functionMain'.  And I have several tables that each 
have trigger functions.  Each trigger function needs to call 'functionMain' 
(with different parameters).

table A =  trigger function A ==  functionMain
table B =  trigger function B ==  functionMain
table C =  trigger function C ==  functionMain

'functionMain' returns VOID (runs an insert statement) and has an 
exception/raise exception block.

An insert transaction for table A is launched (insertA), trigger function A is 
called,
'functionMain' is called and 'functionMain' fails.  Hence, trigger function A 
needs to rollback.

Questions:

a) I am assuming that the trigger functions should use 'PERFORM 
functionMain()'?


If you don't want the result, yes.


b) if 'functionMain' fails, then 'funtionMain' automatically performs an 
implicit rollback, correct?

c) if 'functionMain' fails, should the trigger function also contain an 
exception handler
or will the rollback from 'functionMain' cascade up to the original 
transaction (insertA)?


Unless you catch the exception, it will roll back the whole transaction, 
so yes to b + c. If it helps to visualise what happens, exceptions are 
actually implemented using savepoints in plpgsql.


--
  Richard Huxton
  Archonet Ltd

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] function doesn't see change in search_path

2011-11-07 Thread Richard Huxton

On 07/11/11 14:43, Ivan Sergio Borgonovo wrote:


create or replace function test_insert() returns void as

[snip]

$$ language plpgsql volatile;

set search_path to 'test1', 'public';



set search_path to 'test2', 'public';

[snip unexpected behaviour]


I now try to add a SET search_path to the bottom of all my plpgsql 
functions. It can get very confusing otherwise, as you've just demonstrated.
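
That is, pin the path on the function definition itself - a sketch based on 
the OP's test_insert(), with a made-up INSERT target:

  CREATE OR REPLACE FUNCTION test_insert() RETURNS void AS $$
  BEGIN
      INSERT INTO t(v) VALUES (1);
  END;
  $$ LANGUAGE plpgsql VOLATILE
     SET search_path = test1, public;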


--
  Richard Huxton
  Archonet Ltd

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Analytic type functionality, matching patters in a column then increment an integer

2011-10-05 Thread Richard Huxton

On 05/10/11 19:29, Henry Drexler wrote:


and would like to have a column indicate like this:

'evaluation' 'indicator'
tf 1
tt 1
ft  1
ff
ff
tf  2
ft  2
tf  3
tt  3
ft  3
ff


SELECT id,evaluation,sum(case when evaluation='tf' then 1 else 0 end) 
over (order by id) FROM tfcount ORDER BY id;


id | evaluation | sum
----+------------+-----
  1 | tf |   1
  2 | tt |   1
  3 | ft |   1
  4 | ff |   1
  5 | ff |   1
  6 | tf |   2
  7 | ft |   2
  8 | tf |   3
  9 | tt |   3
 10 | ft |   3
 11 | ff |   3
(11 rows)

OK, so that's almost it, but you'd like 'ff' to be null. You probably 
can do it with a suitably nested CASE, but it's probably clearer as a 
sub-query.


SELECT
  id,
  evaluation,
  CASE WHEN evaluation='ff' THEN null::int
  ELSE sum::int END AS section_num
FROM (
  SELECT
id,
evaluation,
sum(case when evaluation='tf' then 1 else 0 end) over (order by id)
  FROM tfcount
) AS rows
ORDER BY id;

HTH

P.S. - I always find the windowing function syntax confusing, but it's 
as the standards define I believe.


--
  Richard Huxton
  Archonet Ltd

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] stored procs

2011-09-29 Thread Richard Huxton

On 29/09/11 02:33, J.V. wrote:

Is it possible within a stored procedure to read all the tables in a
schema into a list?

[snip]

I need to extract this meta-data for a project.


Apart from information_schema mentioned elsewhere, start psql with -E 
and then try \dt and similar - it will show you the queries it uses.
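
For the list itself, information_schema is the portable route - a sketch:

  SELECT table_name
  FROM information_schema.tables
  WHERE table_schema = 'public'
    AND table_type = 'BASE TABLE';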


--
  Richard Huxton
  Archonet Ltd

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Create Extension search path

2011-09-29 Thread Richard Huxton

On 28/09/11 18:51, Roger Niederland wrote:

To add the extension required me to change the search_path.
Is it required that any schema added to the search_path exists in all
databases?


If set in the configuration file, yes (unless you want errors).

You can set it per-database or per-user though. See ALTER DATABASE or 
ALTER ROLE.
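
For example (database, role and schema names are placeholders):

  ALTER DATABASE mydb SET search_path = myschema, public;
  ALTER ROLE myuser   SET search_path = myschema, public;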


--
  Richard Huxton
  Archonet Ltd

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] : Checksum ERROR when restoring Online Backup

2011-09-23 Thread Richard Huxton

On 23/09/11 13:53, Venkat Balaji wrote:

Linux *prod-server* 2.6.18-194.17.1.el5 #1 SMP Mon Sep 20 07:12:06 EDT
2010 x86_64 x86_64 x86_64 GNU/Linux

The server version where I restored the production backup is as follows -
Linux *backup-server* 2.6.18-194.3.1.el5PAE #1 SMP Sun May 2 04:42:25
EDT 2010 i686 i686 i386 GNU/Linux

I read somewhere that Postgres datafiles are not architecture
independent.


They (the WAL files) are not, and it looks like you're trying to restore 
a 64-bit version onto a 32-bit server. That's not going to work. A 
pg_dump/restore works of course, and if you need replication then Slony 
can handle this.


--
  Richard Huxton
  Archonet Ltd

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Remote connection shows localhost databases

2011-09-16 Thread Richard Huxton

On 16/09/11 09:01, Guillaume Lelarge wrote:

On Fri, 2011-09-16 at 08:14 +0100, Richard Huxton wrote:

Odd that pgAdmin
doesn't give an error though.



Probably because the OP entered the Windows networking path in the Name
field, and didn't change the Host field. In which case, pgAdmin most
likely try to connect locally.


Ah - I can see how that would make sense.

--
  Richard Huxton
  Archonet Ltd

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Remote connection shows localhost databases

2011-09-16 Thread Richard Huxton

On 15/09/11 22:40, Guillaume Lelarge wrote:

On Thu, 2011-09-15 at 15:30 -0400, Jeff Adams wrote:


When I try to connect to the remote machine, I enter \\server\ip_address
into the host name field.


The host field should contain the
socket complete path, or the host name, or the ip address.


As Guillaume says - try just the IP address to start with. What you've 
been trying is sort-of a Windows networking path. Odd that pgAdmin 
doesn't give an error though.


If you were using psql you'd type something like:
  psql -h ip-addr -U username -d dbname

Once you're happy the ip-address is working, try just the server-name by 
itself. You'll want the internet name for the machine which in theory 
can be different from the Windows network name, but usually is the same.


--
  Richard Huxton
  Archonet Ltd

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] forcing table ownership

2011-09-16 Thread Richard Huxton

On 16/09/11 14:13, Darin Perusich wrote:


Altering the table owner by setting it to the group role effectively
denies permission to all users of the group. Unless they explicitly SET
role grp1 that is.


I've already got a user richardh

As a superuser:
  CREATE GROUP mygroup INHERIT;
  GRANT mygroup TO richardh;
  CREATE TABLE shared_table (i int);
As richardh:
  ALTER TABLE shared_table ADD COLUMN t text;
  ERROR:  must be owner of relation shared_table
As superuser:
  ALTER TABLE shared_table OWNER TO mygroup;
As richardh:
  ALTER TABLE shared_table ADD COLUMN t text;
  ALTER TABLE

I think the key bit you're missing is the INHERIT on the group. Also 
note that the CREATE USER/CREATE GROUP commands actually just run CREATE 
ROLE under the hood.


--
  Richard Huxton
  Archonet Ltd

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Remote connection shows localhost databases

2011-09-15 Thread Richard Huxton

On 15/09/11 17:30, Jeff Adams wrote:

I am trying to connect to a workstation running postgres from another
computer on the network.


How are you connecting? PgAdmin, psql, Perl script? What is the hostname 
you are using on the local machine and are you sure it is for the remote 
machine?


--
  Richard Huxton
  Archonet Ltd

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] psql can't subtract

2011-03-25 Thread Richard Huxton

On 25/03/11 15:29, Rob Sargent wrote:

Running 9.0.3 (client and server)



select distinct
substring(

...

position( 'xmlns=' in substring(xml_text,1,300)) + length('xmlns='),
100))) -1

...

gives
ERROR: negative substring length not allowed.


If there is no match the initial position will be zero. Minus one will 
give you a negative substring length.
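
A quick illustration, and one way to guard against it (a sketch):

  SELECT position('xmlns=' in 'no namespace here');             -- returns 0
  SELECT nullif(position('xmlns=' in 'no namespace here'), 0);  -- returns NULL instead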


--
  Richard Huxton
  Archonet Ltd

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] database is bigger after dump/restore - why? (60 GB to 109 GB)

2011-02-24 Thread Richard Huxton

On 24/02/11 15:24, marcin mank wrote:

On Tue, Feb 22, 2011 at 9:44 PM, Aleksey Tsalolikhin
atsaloli.t...@gmail.com  wrote:


Most of our data is in a single table, which on the old server is 50 GB in
size and on the new server is 100 GB in size.



Maybe the table on the new server has a fillfactor less than 100?


That would be my immediate guess. Someone changed the fillfactor on the 
table - that won't affect the existing data but will affect a restore.


--
  Richard Huxton
  Archonet Ltd

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Inserting data from one database to another using stored functions

2011-01-07 Thread Richard Huxton

On 07/01/11 01:56, Benjie Buluran wrote:

Hi pgSQL peeps!

I’m stumped on this question for over 3 days now.



PERFORM dblink_exec('SELECT sp_insert_detailtable('|| pActivityId ||',
'|| pserialnumber ||')');



I’m using the DEBUG function in pgAdmin, and I keep getting the
“*statement returning results not allowed*” error in /PERFORM
dblink_exec('SELECT sp_insert_detailtable('|| pActivityId ||', '||
pserialnumber ||')');/ in this line.


A quick look at the docs for dblink_exec say dblink_exec executes a 
command (that is, any SQL statement that doesn't return rows). A SELECT 
statement returns rows. Zero rows are still rows. What happens if you 
just use dblink(...)?
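
Something along these lines, perhaps - the connection string, argument values and the function's return type are guesses, and dblink() needs a column definition list because it returns SETOF record:

  SELECT *
  FROM dblink('host=otherhost dbname=otherdb',
              'SELECT sp_insert_detailtable(1, 42)')
       AS t(result integer);

Inside the plpgsql function that would become PERFORM * FROM dblink(...) AS t(result integer), or SELECT ... INTO if you want the result back.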


http://www.postgresql.org/docs/9.0/static/contrib-dblink-exec.html
http://www.postgresql.org/docs/9.0/static/contrib-dblink.html

--
  Richard Huxton
  Archonet Ltd

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Pl/perl and perl version-tip in doc

2011-01-06 Thread Richard Huxton

On 06/01/11 18:07, pasman pasmański wrote:

It is need tip in doc which version of perl must be installed. Error
message tells nothing. For example Postgres 8.4 works only with perl
5.10.


Are you sure that's the case? Could it be that you're using a 
pre-compiled version of plperl?


--
  Richard Huxton
  Archonet Ltd

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Which variant to choose?

2010-10-30 Thread Richard Huxton

On 30/10/10 07:56, RP Khare wrote:

  I want to use PostgreSQL for my standalone .NET Windows application. I
downloaded PostgreSQL Advanced Server from EnterpriseDB's website and it
is very smooth. The GUI also very good. But I suspect how along
EnterpriseDB will support it.

 The second option is to download from postgresql.org
 http://www.postgresql.org/.

There aren't any substantial differences (that I'm aware of) in the free 
download from EnterpriseDB vs the community release. It includes a lot 
of work on the installer package.



I want an edition that lasts long and I can easily upgrade it to future
versions without any problems and it must work smoothly with .NET. It
must also integrate seamlessly with third-party reporting tools.


The core postgresql project typically supports releases for about 5 
years. Upgrades within a major release (e.g. 9.0.1 to 9.0.4) are simple. 
Upgrades between major releases (e.g. 9.0.x to 9.1.x) can be done via a 
dump/restore or (for the latest releases) the pg_upgrade utility (don't 
know if it's ported to Windows).


As regards .net and reporting tools, I can't comment. We have drivers 
for .net, odbc and jdbc but whether they are seamless enough only you 
can decide.


--
  Richard Huxton
  Archonet Ltd

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Upgrade from 8.3.3

2010-10-21 Thread Richard Huxton

On 21/10/10 00:04, Rich Shepard wrote:

While I'm re-learning how to properly start postgres after a reboot,
I'd like recommendations on upgrading my current 8.3.3 to a newer
version. Since I'm the only one currently using the system (but I'll
be using CMS Made Simple for my revised web site and that works only
with the 8.x releases), I wonder if I should upgrade to 8.4.5 or 9.0.
I suspect the former, but I'd appreciate thoughts from more
knowledgeable folks here.


I'd be surprised if there was anything in CMS Made Simple that would 
work with 8.4 and not 9.0. Have they said it doesn't or is it just not 
tested against it?


--
  Richard Huxton
  Archonet Ltd

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Record Separator with psql -c when output to a variable not a file!

2010-10-04 Thread Richard Huxton

On 04/10/10 20:51, andrew harvey wrote:

The default psql -c record separator is a newline when output is the
screen, console or a file. But what if I'm required to send output to
a variable and not to a file (or standard output?)

command=`psql  -c SELECT * FROM pg_stat_database`



I know that, obviously, if you output the result of the sql query to a
file and then use grep and awk you could have wonderful output all the
time. But there is a specific requirement here to do all the
formatting from within a variable!


This probably won't work:
  echo $command | cut -f 2,4,5
This probably will:
  echo "$command" | cut -f 2,4,5

I hate shell - I always spend hours trying to get quoting to work properly.

Oh, for the archives Andrew is probably using a command like:
  psql -t --no-align --field-separator=$'\011'
That turns the header and footer off (tuples only) and sets the output 
to unaligned tab-separated columns.


--
  Richard Huxton
  Archonet Ltd

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Index on points

2010-09-25 Thread Richard Huxton

On 23/09/10 11:45, A B wrote:

Hello.

If I have a table like this

create table fleet ( ship_id   integer,  location point);

and fill it with a lot of ships and their locations and then want to
create an index on this to speed up operations on finding ships within
a certain region (let's say its a rectangular region), how do I do
this?

I tried:

CREATE INDEX my_index  ON fleet USING gist ( box(location,location)); ?


That's the idea, but you'll need to be careful about how you're 
searching against it. Remember, the index is on a box based on the 
location, not the point location itself.


CREATE TABLE fleet (ship int, locn point);

INSERT INTO fleet SELECT (x*1000 + y), point(x,y)
FROM generate_series(0,999) x, generate_series(0,999) y;

CREATE INDEX fleet_locn_idx ON fleet USING gist( box(locn,locn) );
ANALYSE fleet;

EXPLAIN ANALYSE SELECT count(*) FROM fleet
WHERE box(locn,locn) @ box '(10,10),(20,20)';
                                                      QUERY PLAN
------------------------------------------------------------------------------------------------------------------------
 Aggregate  (cost=2654.84..2654.85 rows=1 width=0) (actual time=4.611..4.612 rows=1 loops=1)
   ->  Bitmap Heap Scan on fleet  (cost=44.34..2652.33 rows=1000 width=0) (actual time=4.344..4.491 rows=121 loops=1)
         Recheck Cond: (box(locn, locn) @ '(20,20),(10,10)'::box)
         ->  Bitmap Index Scan on fleet_locn_idx  (cost=0.00..44.09 rows=1000 width=0) (actual time=4.311..4.311 rows=121 loops=1)
               Index Cond: (box(locn, locn) @ '(20,20),(10,10)'::box)
 Total runtime: 4.694 ms
(6 rows)

DROP INDEX fleet_locn_idx;

EXPLAIN ANALYSE SELECT count(*) FROM fleet
WHERE box(locn,locn) @ box '(10,10),(20,20)';
                                                 QUERY PLAN
-----------------------------------------------------------------------------------------------------------
 Aggregate  (cost=20885.50..20885.51 rows=1 width=0) (actual time=551.756..551.757 rows=1 loops=1)
   ->  Seq Scan on fleet  (cost=0.00..20883.00 rows=1000 width=0) (actual time=5.142..551.624 rows=121 loops=1)
         Filter: (box(locn, locn) @ '(20,20),(10,10)'::box)
 Total runtime: 551.831 ms
(4 rows)

--
  Richard Huxton
  Archonet Ltd

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] varchar lengths

2010-09-21 Thread Richard Huxton

On 21/09/10 10:40, Marcus Engene wrote:

Hi list,

In Oracle I can...
create table a
(
b varchar2(10 chars)
);
...and then, regardless of character encoding and how much space an
ascii character vs a ö takes, 10 characters will fit there.



Is there anything I've misunderstood? How does the rest of you deal with
this situation?


PostgreSQL actually measures length in characters anyway, so varchar(10) 
always holds 10 characters, whatever they are.


You'll need to have the appropriate database encoding for those 
characters of course.
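
A quick way to convince yourself (this assumes a UTF-8 database; the byte count will differ in other encodings):

  SELECT length('grön') AS characters, octet_length('grön') AS bytes;
  -- characters = 4, bytes = 5: the ö takes two bytes but still counts as one character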


--
  Richard Huxton
  Archonet Ltd

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] to_date conversion semantics?

2010-09-20 Thread Richard Huxton

On 20/09/10 20:18, Colin 't Hart wrote:

The 32nd of Undecember (!) turning into the 1st of February of the
next year... instead of throwing an exception like I expect.


What Tom said, but it's presumably using mktime(...) somewhere internally.

perl -MPOSIX -e 'print scalar gmtime(mktime(0,0,0,32,13-1,73)),"\n"'
Fri Feb  1 00:00:00 1974

http://perldoc.perl.org/POSIX.html#mktime
http://linux.die.net/man/3/mktime

--
  Richard Huxton
  Archonet Ltd

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Regular expression in an if-statement will not work

2010-09-09 Thread Richard Huxton

On 09/09/10 11:55, Ungermann Carsten wrote:

Dear postgres list,

I need to verify the value of a column by a regular expression in an
if-statement before insert or update. It should be a one to three digit
value.



'^[0-9]{1,3}$' -- don't work


Works here.

CREATE TEMP TABLE tt (t text);
INSERT INTO tt VALUES ('1'),('12'),('123'),('1234'),(' 123'),('123 ');
SELECT ':' || t || ':' AS target, t ~ '^[0-9]{1,3}$' FROM tt;
 target | ?column?
--------+----------
 :1:    | t
 :12:   | t
 :123:  | t
 :1234: | f
 : 123: | f
 :123 : | f
(6 rows)

Works in 8.2, 8.3, 8.4, 9.0 for me. Either you're not testing the values 
you think you are, or there is some issue with escaping of characters.
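
For reference, a check like the one described would look something like this in a plpgsql trigger function (the function and column names are made up):

  CREATE FUNCTION check_code() RETURNS trigger AS $$
  BEGIN
      IF NEW.code !~ '^[0-9]{1,3}$' THEN
          RAISE EXCEPTION 'code must be one to three digits, got %', NEW.code;
      END IF;
      RETURN NEW;
  END;
  $$ LANGUAGE plpgsql;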


--
  Richard Huxton
  Archonet Ltd

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] On-disk size of db increased after restore

2010-09-01 Thread Richard Huxton

On 31/08/10 22:17, Devrim GÜNDÜZ wrote:

I have seen the opposite of this tons of times before, but I haven't
seen an increase after restore before. Does anyone know what may cause
this? Where should I look at?


Could you have changed the fillfactor on some big tables/indexes in the 
live database after populating them?


Is the locale the same on each machine/db?

--
  Richard Huxton
  Archonet Ltd

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] On-disk size of db increased after restore

2010-09-01 Thread Richard Huxton

On 01/09/10 21:32, Devrim GÜNDÜZ wrote:

On Wed, 2010-09-01 at 21:13 +0100, Richard Huxton wrote:


Could you have changed the fillfactor on some big tables/indexes in
the  live database after populating them?


Nope. Even a pg_dump -h prod|psql backup_node resulted with the same
issue


Is the locale the same on each machine/db?


These are generic RPM installations, and locales are the same...


OK - so not fillfactor and not some unicode-related padding. I can't see 
how a 32 vs 64-bit architecture change could produce anything like a 
doubling of database size.


Is it that each file is doubled in size, or are some much larger while 
others are about the same? If the indexes are to blame it's presumably 
something to do with the order of row access during index creation.


--
  Richard Huxton
  Archonet Ltd

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Foreign keys and permissions oddity

2010-08-07 Thread Richard Huxton

On 07/08/10 01:13, Joshua Tolley wrote:

Is there some justification for this behavior that I should know already? It
seemed awfully strange when some folkds here stumbled on it:

[snip]

The key point seems to be that the owner of the referenced table has no
permissions on the table, although the referencing user does.


Presumably the underlying trigger functions are executing as the owner 
of the table. This would make sense in the (more common) case that you 
want to reference a table you don't necessarily have full read access 
for (e.g. member-id vs the whole row including address/phone).


You should be able to track the table's OID from pg_class through to 
tgrelid on pg_trigger and then tgfoid to the relevant OIDs in pg_proc. 
The functions are all named as RI_FKey_xxx.
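
Something like this should list them for a given table (untested sketch; the table name is a placeholder):

  SELECT t.tgname, p.proname
  FROM pg_trigger t
  JOIN pg_proc p ON p.oid = t.tgfoid
  WHERE t.tgrelid = 'my_referenced_table'::regclass
    AND p.proname ~ '^RI_FKey_';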


Hmm - not sure if they execute as the table owner or the creator of the 
constraint. You could justify either, but of course they're frequently 
the same (as in your case).


--
  Richard Huxton
  Archonet Ltd

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] PQescapeStringConn

2010-07-30 Thread Richard Huxton

On 30/07/10 07:52, Scott Frankel wrote:

I have a number of very long strings that each contain many instances of
semi-colons, single quotes, forward and back slashes, etc. I'm looking
for an efficient and safe way to write them to my db using a prepared
statement.


What language? From C?


PREPARE fooprep (VARCHAR(32), text, text) AS
INSERT INTO foo (name, description, body) VALUES ($1, $2, $3);
EXECUTE fooprep('foo1', 'this is foo1',


This is basically PQprepare+PQexecPrepared, or PQexecParams if you want 
to do both in one step. There is no need to escape strings if they are 
passed as parameters - the library knows it's a string and handles that 
for you.


Where you *do* have to worry about escaping strings is if you are 
building up a query and have e.g. a varying table-name. It's legal for 
table names to contain spaces etc. but they need to be quoted correctly.


Every application language will have its own library, but they all have 
a similar prepare+exec option (and I think most use the C libpq 
interface underneath).


--
  Richard Huxton
  Archonet Ltd

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] PostgreSQL keepalives help

2010-07-30 Thread Richard Huxton

On 29/07/10 21:16, Kananda wrote:

I need the records are free for edit, in few minutes. Therefore I set
the KEEPALIVE configuration in my postgresql.conf:



tcp_keepalives_idle = 60 # TCP_KEEPIDLE, in seconds;
tcp_keepalives_interval = 60 # TCP_KEEPINTVL, in seconds;
tcp_keepalives_count TCP_KEEPCNT = 5 #;


If this is actually what you have in your file, you have an error in 
your count setting.


Try show tcp_keepalives_count; and so on - just check they're set as 
you expect them to be. If they have the right values, put wireshark or 
some other network monitor onto the connection and see if any packets 
are being sent back and fore while there is no real traffic.
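
For the record, once the stray TCP_KEEPCNT text is removed that last line should simply read tcp_keepalives_count = 5, and the checks from psql would look like:

  SHOW tcp_keepalives_idle;      -- expect 60
  SHOW tcp_keepalives_interval;  -- expect 60
  SHOW tcp_keepalives_count;     -- expect 5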


Oh, and please don't cross-post to multiple lists (particular 
-cluster-hackers - can't see the relevance of that).


--
  Richard Huxton
  Archonet Ltd

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] PQescapeStringConn

2010-07-30 Thread Richard Huxton

On 30/07/10 16:57, Scott Frankel wrote:


On Jul 30, 2010, at 1:13 AM, Richard Huxton wrote:


On 30/07/10 07:52, Scott Frankel wrote:

I have a number of very long strings that each contain many instances of
semi-colons, single quotes, forward and back slashes, etc. I'm looking
for an efficient and safe way to write them to my db using a prepared
statement.


What language? From C?


Importing an SQL script. eg: \i my_script_of_prepared_statements.sql



The full statement (below) illustrates the problem I'm encountering.



INSERT INTO foo (name, body) VALUES ('foo', 'this will fail 'fer sher;'
on the characters inside the string');


Ah - the solution is: don't do that.

You're going to have to pre-process the strings in some way, or there 
will always be the chance of problems. Probably the best way to handle a 
bulk insert is through the COPY command:


BEGIN;

COPY foo (name, body) FROM stdin;
n1  b1
n2  b2
foo this will fail 'fer sher;' on the characters inside the string
\.

COMMIT;

By default COPY expects one line per row, with columns separated by tab 
characters. You can also have '/path/to/file/name' instead of stdin, but 
the file will need to be accessible from the backend process. If that's 
not the case (and it probably isn't) then you want to use psql's \copy 
variant which views the world from the client end of things.


COPY is faster than separate inserts and the only characters you need to 
worry about are tab, carriage-return and newline. These would be 
replaced by the sequences \t, \r, \n.


I don't know what format your strings are in initially, but a bit of 
perl/python/ruby can easily tidy them up.


Finally, more recent versions of PG have a COPY that supports CSV 
formatting too. See the manuals for more details on this.
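
For example (a sketch - the file path and CSV layout are assumptions about your data):

  COPY foo (name, body) FROM '/tmp/foo.csv' WITH CSV;
  -- or, reading the file on the client side:
  -- \copy foo (name, body) from 'foo.csv' with csv

CSV mode handles embedded quotes and commas for you, so only the quote and delimiter characters you choose need any care.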


--
  Richard Huxton
  Archonet Ltd

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Comparison of Oracle and PostgreSQL full text search

2010-07-29 Thread Richard Huxton

On 28/07/10 02:58, Howard Rogers wrote:

For what it's worth, I wrote up the performance comparison here:
http://diznix.com/dizwell/archives/153


Thanks very much Howard.

It might be my schoolboy-physics ability to fit a curve to two data 
points, but does anyone else think that the second and third graphs look 
like a sinusoidal variation overlaid on a steadily increasing baseline?



--
  Richard Huxton
  Archonet Ltd

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Full Text Search dictionary issues

2010-07-16 Thread Richard Huxton

On 16/07/10 05:22, Howard Rogers wrote:

OK, Tom: I did actually account for the number of rows difference
before I posted, though I accept I didn't show you that. So here goes:


Tom's good, but his mind-reading powers aren't what they used to be :-)


ims=# select count(*)
ims-# from search_rm
ims-# where to_tsvector('english', textsearch) @@
to_tsquery('english', 'woan & batt & ftxa')
ims-# limit 20;
  count
---
 0
(1 row)

Time: 0.593 ms
ims=# select count(*)
from search_rm
where to_tsvector('english', textsearch) @@ to_tsquery('english',
'woan & batt & ftx1')
limit 20;
  count
---
 0
(1 row)

Time: 489.362 ms

Both queries return zero rows. One takes an awful lot longer than the
other.


And how many matches do you get for each term? Is it equally slow if you 
search for 'wommman & batt & pzsdja' vs '... pzsdj1'?


I'm assuming pzsdja/1 aren't valid tokens of course...

--
  Richard Huxton
  Archonet Ltd

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] error CDT FATAL: invalid frontend message type 69

2010-07-15 Thread Richard Huxton

On 15/07/10 07:04, ankit kamal wrote:

Hi All


I am using a  cluster setup  with two nodes in it. Replication between
two nodes is being done through slony.

Postgres version is 8.1.2 and slony version is 1.1.5 .

On Master node an error CDT FATAL: invalid frontend message type 69
encountered at 10:51 and postgres crashed.


Would CDT be your timezone (Central Daylight Time I think)? It's not 
part of any PG error message.


The other part is an error message that gets produced when the database 
receives a message (69 = E = Execute) for a protocol version later 
than the client has said it's using. In this case, probably PQexecParams 
or similar (see the libpq chapter of the manual).


Either some client code thought it was talking to an old version of 
PostgreSQL (that only supported protocol 2 rather than 3), there was a 
network error (and a message got garbled) or you've hit a bug.


It is a good idea to upgrade to the latest 8.1.x release of PostgreSQL 
and also perhaps a more recent slony - both have received bug-fixes 
since the versions you are running. Particularly upgrade PostgreSQL - 
you're missing almost 20 sets of bug-fixes.



There were no postgres logs logged from 10:51 to 11:11, and after that,
error CDT FATAL:  sorry, too many clients already was logged in
postgres logs upto postgres shutdown. The problem resolved after the
postgres was restarted on master node.

I could not find this error defined anywhere.


Well, "too many clients" just means you have reached your maximum number of 
connections. It would have been useful to check what was connected, and 
what each connection was doing. Always worth seeing what ps reports 
the machine is doing and what the pg_stat_activity view says 
PostgreSQL is doing.



Any ideas of what it means, how to track the cause and cure?

Is there any way to reproduce this error?


Not without knowing more about how it happened, which we could only 
investigate before you restarted the master.


--
  Richard Huxton
  Archonet Ltd

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] 'default nextval()' loses schema-qualification in dump ?

2010-07-07 Thread Richard Huxton

On 07/07/10 07:47, Arnaud Lesauvage wrote:

Le 6/07/2010 17:17, Tom Lane a écrit :

Arnaud Lesauvagearnaud.lis...@codata.eu writes:

As you have understood, I am not very savvy about postgresql's
internals, but from what you say my guess is that the problem is int the
psqlODBC is getting the default value of the sequence ?



[9.125]conn=095C4198, query='select n.nspname, c.relname, a.attname,
a.atttypid, t.typname, a.attnum, a.attlen, a.atttypmod, a.attnotnull,
c.relhasrules, c.relkind, c.oid, d.adsrc, case t.typtype when 'd' then
t.typbasetype else 0 end, t.typtypmod from (((pg_catalog.pg_class c
inner join pg_catalog.pg_namespace n on n.oid = c.relnamespace and
c.relname = E'mytable' and n.nspname = E'myschema') inner join
pg_catalog.pg_attribute a on (not a.attisdropped) and a.attnum > 0 and
a.attrelid = c.oid) inner join pg_catalog.pg_type t on t.oid =
a.atttypid) left outer join pg_attrdef d on a.atthasdef and d.adrelid =
a.attrelid and d.adnum = a.attnum order by n.nspname, c.relname, attnum'


This is psqlODBC getting the sequence name (if you run this query it's 
the adsrc column). If I remember correctly, that's supposed to be the 
human-readable version of an expression and preserved *as entered by the 
user* (or pg_restore in your case).


If you start psql with the -E option and do \d myschema.mytable you'll 
be able to see how it gets the sequence-name. About half-way down the 
list of queries it runs you'll see a reference to pg_get_expr(...) - 
that turns an internal representation into a useful usable one.


I don't know why psqlODBC isn't using that. The function has been around 
for a while. Hmm - it's present back in 7.4 although it's not used in \d 
- that does reference adsrc directly.
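
For reference, a lookup based on pg_get_expr() would be along these lines (sketch only):

  SELECT a.attname,
         pg_get_expr(d.adbin, d.adrelid) AS default_value
  FROM pg_attribute a
  JOIN pg_attrdef d ON d.adrelid = a.attrelid AND d.adnum = a.attnum
  WHERE a.attrelid = 'myschema.mytable'::regclass
    AND a.atthasdef;

That regenerates the default expression from the stored parse tree rather than relying on whatever text the user (or pg_restore) originally typed.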


Just grabbed the source download for the latest version and it still 
looks like it's using adsrc (I just searched for that and pg_get_expr). 
There should probably be a change in info.c around line 2091 to add a 
check for a recent version of PG (8+) and use pg_get_expr. Check on the 
odbc mailing-list - there may be an updated version available for you to 
test.


--
  Richard Huxton
  Archonet Ltd

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Identical command-line command will not work with \i metacommand and filename

2010-04-22 Thread Richard Huxton

On 22/04/10 08:24, John Gage wrote:

Yeesh.  What the ding-dong is this?

JohnGage:EFNWebsite johngage$ od -a CopySql.sql
000 ? ? ? s e l e c t sp * sp f r o m
020 sp m e s h _ d e s c r i p t o r
040 s ; nl nl

What are the ?'s. Mon Dieu, what is going on?


http://en.wikipedia.org/wiki/Byte_order_mark

Tends to get added if you go through a Windows system. Useless for utf-8 
afaik. Confuse the hell out of you because various tools parse and hide 
them then you pipe the file to a script and everything falls over.


Bunch of scripts available here to remove them:
  http://www.xs4all.nl/~mechiel/projects/bomstrip/

--
  Richard Huxton
  Archonet Ltd

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] PSQL segmentation fault after setting host

2010-04-22 Thread Richard Huxton

On 22/04/10 20:06, Morgan Taschuk wrote:

Program received signal SIGSEGV, Segmentation fault.
0xb7bbb4eb in X509_VERIFY_PARAM_inherit () from
/lib/i686/cmov/libcrypto.so.0.9.8
(gdb) bt
#0 0xb7bbb4eb in X509_VERIFY_PARAM_inherit () from
/lib/i686/cmov/libcrypto.so.0.9.8
#1 0xb7f9b61a in ssl_verify_cert_chain () from
/lib/i686/cmov/libssl.so.0.9.8


See if disabling ssl fixes it (see your pg_hba.conf and look for 
hostssl lines).



/opt/PostgreSQL/psqlODBC/lib/libpq.so.5


I'd guess the root cause is here though. You seem to be picking up some 
libraries from psqlODBC rather than your main package. I'm betting it's 
got some incompatible changes. Uninstall psqlodbc for a minute and see 
if that solves your problem.



libssl.so.4 => /opt/PostgreSQL/psqlODBC/lib/libssl.so.4 (0x009d9000)
libcrypto.so.4 => /opt/PostgreSQL/psqlODBC/lib/libcrypto.so.4 (0x00531000)
libkrb5.so.3 => /opt/PostgreSQL/psqlODBC/lib/libkrb5.so.3 (0x00a2c000)
libgssapi_krb5.so.2 => /opt/PostgreSQL/psqlODBC/lib/libgssapi_krb5.so.2
libk5crypto.so.3 => /opt/PostgreSQL/psqlODBC/lib/libk5crypto.so.3



--
  Richard Huxton
  Archonet Ltd

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Tuple storage overhead

2010-04-16 Thread Richard Huxton

On 16/04/10 10:41, Peter Bex wrote:

Hi all,

I have a table with three columns: one integer and two doubles.
There are two indexes defined (one on the integer and one on one
of the doubles).  This table stores 70 records, which take up
30 Mb according to pg_relation_size(), and the total relation size
is 66 Mb.

[snip]

Is there a way to reduce the per-tuple storage overhead?


Short answer - no.

The database has to track visibility of every row - when it was 
inserted, deleted etc to support the MVCC concurrency system.


http://www.postgresql.org/docs/8.4/static/storage-page-layout.html

That means 24 bytes of overhead (on most systems) for each row. That's 
higher than some other RDBMS but they'll all have some overhead.
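
If you want to see the real per-row cost for one of your tables, something like this gives a rough figure (the table name is a placeholder; the result also includes page headers, item pointers and any free space):

  SELECT pg_relation_size('my_narrow_table')::numeric
         / NULLIF(count(*), 0) AS bytes_per_row
  FROM my_narrow_table;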



The reason I'm asking is that I have tons of tables like this,
and some data sets are much bigger than this.  In a relatively
simple testcase I'm importing data from text files which are
5.7 Gb in total, and this causes the db size to grow to 34Gb.


Anything from double to ten times the size isn't unexpected, depending 
on row-sizes and how many indexes you are talking about.


--
  Richard Huxton
  Archonet Ltd

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Text search

2010-03-16 Thread Richard Huxton

On 16/03/10 10:29, Chris Roffler wrote:

I have a text column in a table. We store XML in this column. Now I want to
search for tags and values



select * from  where to_tsvector('english',xml_column) @@
to_tsquery('<name>Citi Bank</name>')

This works fine but it also works for any tag as long as the name 'Citi
Bank' is present.

How do I have to setup my search in order for this to work so I get an exact
match for the tag and value ?


Would you not be better off with an XML data-type if that's how you want 
to treat the data?

http://www.postgresql.org/docs/8.4/static/datatype-xml.html
http://www.postgresql.org/docs/8.4/static/functions-xml.html
http://www.postgresql.org/docs/8.4/static/xml2.html

--
  Richard Huxton
  Archonet Ltd

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] reuse data in existing data directory

2010-03-16 Thread Richard Huxton

On 16/03/10 12:02, Peter Schmidtke wrote:

Dear PostGreSQL users,

I have the following problem. I have a system with several partitions
(openSuse Linux 11.1). For what is important related to this question :

- 1 system partition (where postgres is installed)
- 1 data partition (where the postgres data directory is lying)

my system partition got messed up and I prefer to reformat and reinstall
the system without touching to the data partition. Thus I have to reinstall
postgresql, but I don't know if I would be able to recover my data. Is that
possible? How can I reintegrate all my ancient databases?


If the data partition contains *all* of your PostgreSQL data files 
(pg_clog, pg_xlog etc) then it should be simple enough.


1. Take a backup of your PG data (can't hurt).
2. Re-install SuSe, make sure you have the same version of PostgreSQL 
(if you had 8.3 before re-install that 8.3 again).
3. If your PG files are in the standard directory then you should see an 
error saying initdb refused to run.

4. That's it - it should all just work.

--
  Richard Huxton
  Archonet Ltd

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Text search

2010-03-16 Thread Richard Huxton

On 16/03/10 12:36, Chris Roffler wrote:

Richard

I tried all that and you can see it on this thread, there are some
limitations on indexs on xpath work

http://archives.postgresql.org/pgsql-general/2010-03/msg00270.php
http://archives.postgresql.org/pgsql-general/2010-03/msg00270.php


OK - I've read through your other thread and I think you either:
1. Don't want to use a relational database for this (use an XML database 
instead)

2. Want to make the data at least slightly relational.

You're trying to index multiple fragments of a value. In your case it 
seems to be name tags within an XML fragment containing a list of 
attributes. That's exactly equivalent to trying to get an index search 
for '%ABC%' on text.


There's nothing you can do without exposing the structure of your value 
to the database.


You could write a custom parser for tsearch so it picked out only the 
relevant pieces of XML. That's probably more work than you want though.


You could run an xslt transform over the xml fragments and extract what 
you want and then use tsearch to index that, I suppose. Similarly, you 
might be able to do the same via xslt and xquery.


Finally, and to my mind most sensibly, if you want to search attributes, 
then store attributes. Parse out your XML and have an attributes table 
(id, name, value, last_changed, changed_by). That's not brilliant 
because every value will just be text, but at least each attribute is 
its own row.
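
A minimal sketch of that shape (all names invented - adjust to your real keys):

  CREATE TABLE item_attributes (
      item_id      integer NOT NULL,
      name         text    NOT NULL,
      value        text,
      last_changed timestamptz,
      changed_by   text,
      PRIMARY KEY (item_id, name)
  );
  CREATE INDEX item_attributes_name_idx ON item_attributes (name);

A plain btree on (name) then answers "which items have attribute X" without any full-text machinery at all.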


--
  Richard Huxton
  Archonet Ltd

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] UPDATE with JOIN not using index

2010-03-16 Thread Richard Huxton

On 16/03/10 13:05, Arnaud Lesauvage wrote:

Hi all !

PostgreSQL 8.4 here.
I have a simple update query that looks like this :

UPDATE t1
SET col = t2.col
FROM t2
WHERE t1.key1 = t2.key1 AND t1.key2 = t2.key2;

There is an index on (key1,key2) on the joined table (t2).
This query does not use the index.


What does it do, then? The output of EXPLAIN would be a start if EXPLAIN 
ANALYSE is too expensive.


Oh - and how many rows will this actually update?

--
  Richard Huxton
  Archonet Ltd

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] UPDATE with JOIN not using index

2010-03-16 Thread Richard Huxton

On 16/03/10 13:57, Arnaud Lesauvage wrote:

First query :

Merge Join  (cost=699826.38..704333.80 rows=13548 width=836)
  Merge Cond: (((c.rue)::text = (r.rue)::text) AND ((c.codesite)::text = (r.codesite)::text))
  ->  Sort  (cost=696320.21..697701.07 rows=552343 width=823)
        Sort Key: c.rue, c.codesite
        ->  Seq Scan on cellules c  (cost=0.00..443520.43 rows=552343 width=823)
  ->  Sort  (cost=3504.88..3596.96 rows=36833 width=43)
        Sort Key: r.rue, r.codesite
        ->  Seq Scan on rues r  (cost=0.00..711.33 rows=36833 width=43)


Second query :

Seq Scan on cellules c  (cost=0.00..5018080.39 rows=552343 width=823)
  SubPlan 1
    ->  Index Scan using idx_rues_ruecodesite on rues r  (cost=0.00..8.28 rows=1 width=13)
          Index Cond: (((rue)::text = ($1)::text) AND ((codesite)::text = ($0)::text))


OK - we have a merge join in the first case where it joins the 
pre-sorted output of both tables.


In the second case it queries the index once for each row in cellules.

Now look at the costs. The first one is around 704,000 and the second 
one is 5,000,000 - about seven times as much. That's why it's not using the 
index, because it thinks it will be more expensive.


If it's not really more expensive that suggests your configuration 
values aren't very close to reality.


The first query should run faster if it has more work_mem available too. 
At the moment, it's probably going back and fore doing an on-disk sort.
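
Something like this is worth trying in the session before re-running EXPLAIN ANALYSE (the value is only an example - size it to the machine and the ~550k-row sort):

  SET work_mem = '256MB';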


--
  Richard Huxton
  Archonet Ltd

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Text search

2010-03-16 Thread Richard Huxton

On 16/03/10 13:49, Richard Huxton wrote:

You could run an xslt transform over the xml fragments and extract what
you want and then use tsearch to index that, I suppose. Similarly, you
might be able to do the same via xslt and xquery.


Actually, if it's only attribute names you're interested in you could do 
it with xpath


Something like (untested):

ALTER TABLE time_series ADD attr_names text;

UPDATE time_series SET attr_names = array_to_string(
xpath('*/Attribute/Name/text()', external_attributes)
,' '
);

CREATE INDEX fti_attr_names ON time_series USING gin(
  to_tsvector('simple', attr_names)
);

SELECT * FROM time_series WHERE
  to_tsvector('simple', attr_names)
  @@
  to_tsquery('simple', 'attribute22');

I'd probably just store the tsvector rather than text unless the text is 
of some use in itself.
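
That variant would look roughly like this (again untested, same caveats as above; the new column and index names are mine):

  ALTER TABLE time_series ADD attr_names_tsv tsvector;

  UPDATE time_series SET attr_names_tsv = to_tsvector('simple',
      array_to_string(xpath('*/Attribute/Name/text()', external_attributes), ' '));

  CREATE INDEX fti_attr_names_tsv ON time_series USING gin(attr_names_tsv);

  SELECT * FROM time_series
  WHERE attr_names_tsv @@ to_tsquery('simple', 'attribute22');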


If you plan to do anything with the attributes it'd still be better to 
split them out into their own table though.


--
  Richard Huxton
  Archonet Ltd

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Pg 8.4.3 does not start up with Permissions should be u=rwx (0700)

2010-03-15 Thread Richard Huxton

On 15/03/10 20:08, Joseph S wrote:


This change was not documented in the release notes, and I don't think
permission != 700 should be a fatal error. I purposely set o+x to let
any user be able to read the log files.


What Magnus said but also - put your log files somewhere else. That way 
you can have a more permissive system for them.


--
  Richard Huxton
  Archonet Ltd

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] FW: 8.4.2 One Click Installer

2010-03-15 Thread Richard Huxton

On 15/03/10 20:40, Arnold, Sandra wrote:

I need to get a copy of the 8.4.2 Linux 64 bit One Click Installer
instead of 8.4.3.  The plan is to install the 8.4.2 and then test our
Patching procedures when upgrading to 8.4.3.


Hmm - didn't even know we *had* a one-click installer on Linux. I think 
most people either use their package managers or compile from source.


I'm sure one of the nice people at Enterprise DB (it's their package) 
will be along with a proper link shortly. Until then, it looks like you 
could probably figure out the URL by playing with the number for 
fileid if you have 10 mins spare.


This is 8.4.3, Linux 64-bit
http://www.enterprisedb.com/getfile.jsp?fileid=878
This is 8.4.1 Windows
http://www.enterprisedb.com/getfile.jsp?fileid=855

I'd guess it's somewhere between the two.

--
  Richard Huxton
  Archonet Ltd

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] FSM and VM file

2010-03-08 Thread Richard Huxton

On 05/03/10 15:14, akp geek wrote:

Hi All -

 I have turned on the auto vacuum on the slave and for some
reason the db is getting bloated up. The master size is only 1G and the
slave is at 9.2GB now. I did cluster on couple of tables also. did any one
run into this situation? Can you please help?


OK - so autovacuum is now running on the slave, but your database keeps 
growing anyway.


So - either:

1. You have a long-running transaction blocking the vacuum.
2. Your autovacuum isn't running often enough.

The first step with #1 is to check for idle transactions. Look in the 
system view: pg_stat_activity

  SELECT * FROM pg_stat_activity;
Is there anything there that started a long time ago (probably marked 
IDLE IN TRANSACTION)?



For #2, I would start with seeing what tables are affected, then check 
your configuration settings. It might be that autovacuum_naptime is too 
large for your workload, or (auto)vacuum_cost_delay is too high.
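
To see which tables are affected, something like this (8.3 or later) is a reasonable starting point:

  SELECT relname, n_dead_tup, last_vacuum, last_autovacuum
  FROM pg_stat_user_tables
  ORDER BY n_dead_tup DESC
  LIMIT 10;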


http://www.postgresql.org/docs/8.4/static/routine-vacuuming.html#AUTOVACUUM

http://developer.postgresql.org/pgdocs/postgres/runtime-config-autovacuum.html#RUNTIME-CONFIG-AUTOVACUUM


--
  Richard Huxton
  Archonet Ltd

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Foreign key behavior different in a function and outside

2010-03-08 Thread Richard Huxton

On 05/03/10 18:12, Mridula Mahadevan wrote:

Richard, To answer your questions, I have a live application that is
running on postgresql. We are seeing this issue on certain
installations and not on others. So the code is no different in each
set up. I also added the trigger to table B and then the foreign key
error is thrown in all set ups. But without the trigger a delete on
table B from within a function assumes cascade delete even when one
is not specified. Again only in some cases, I can send you the entire
procedure if it helps (the one I have below only has the relevant
parts).


But does the test code you sent show this problem on:
1. all installations
2. some installations
3. none of the installations
4. Don't know - haven't tried the test code

If the test code shows the problem then we know it's something basic in 
your PostgreSQL installations. If it doesn't then it's something in the 
setup of the databases.


I don't think the problem has anything to do with the code of the 
function. You have checked that the code in your functions makes sense 
and looked at it on servers where it works and it doesn't. If the 
problem was there I'm sure you'd have seen it.



--
  Richard Huxton
  Archonet Ltd

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Failed to run initdb: 128

2010-03-08 Thread Richard Huxton

On 05/03/10 20:05, Niranjan Maturi (nmaturi) wrote:

Hi

Thanks for the detailed explanation with the locales. I am trying to get
approvals to use a later version in 8.2. A colleague also suggested that
I install 8.3 version and uninstall it to clean up the machine. Then I
can install 8.2. I will try this as well.

I am attaching the zipped version of the whole log file.


OK - everything in there from [16:02:11:419] onwards (line 11630+) is 
just uninstalling.


The last few actions appear to be:
Action 16:01:37: DoCreateUser. Creating user account...
Action 16:01:38: SetPermissions. Setting filesystem permissions...

This one appears to fail:
Action 16:01:39: RunInitdb. Initializing database cluster (this may take 
a minute or two)...
MSI (s) (44:24) [16:01:39:449]: Executing op: 
CustomActionSchedule(Action=RunInitdb,ActionType=3073,Source=BinaryData,target=runini...@4,CustomActionData=1033;e:\pgsql\;e:\pgsql\data\;e:\pgsql\share\;1502;;C;SQL_ASCII;postgres;postgres;GEAMTEST6V;cupmuser;V%JG:DBAmYeVqyXIBLbU:WfaqfY_L;)
MSI (s) (44:24) [16:01:39:465]: Creating MSIHANDLE (3) of type 790536 
for thread 3620
MSI (s) (44:20) [16:01:39:465]: Invoking remote custom action. DLL: 
C:\WINDOWS\Installer\MSI21.tmp, Entrypoint: runini...@4
MSI (s) (44!C0) [16:01:39:668]: Creating MSIHANDLE (4) of type 790531 
for thread 448

MSI (c) (C4:08) [16:01:39:668]: Note: 1: 2205 2:  3: Error
MSI (c) (C4:08) [16:01:39:668]: Note: 1: 2228 2:  3: Error 4: SELECT 
`Message` FROM `Error` WHERE `Error` = 2867

DEBUG: Error 2867:  The error dialog property is not set
The installer has encountered an unexpected error installing this 
package. This may indicate a problem with this package. The error code 
is 2867. The arguments are: , ,
MSI (c) (C4:08) [16:01:39:668]: Font created.  Charset: Req=0, Ret=0, 
Font: Req=MS Shell Dlg, Ret=MS Shell Dlg

...
DEBUG: Error 2888:  Executing the TextStyle view failed
The installer has encountered an unexpected error installing this 
package. This may indicate a problem with this package. The error code 
is 2888. The arguments are: TextStyle, ,

Failed to run initdb: 128!
Please see the logfile in 'e:\pgsql\tmp\initdb.log'.
...


It might be that the file permissions aren't actually working. If the 
postgres user still exists, can you check it has permission to write 
to e:\pgsql\data and also to acccess e:\pgsql and e:\ too (doesn't 
need write access).


--
  Richard Huxton
  Archonet Ltd

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Failed to run initdb: 128

2010-03-05 Thread Richard Huxton

On 04/03/10 20:43, Niranjan Maturi (nmaturi) wrote:

Hi

Thanks for the reply. Yes, I saw that 2769 is related to locales. On
the machine, I saw a new locale called C is installed. I am not sure
how it got installed. By default, this locale was getting selected. I
did select English - united states and try to install. It also failed
with the same error.


The C locale is a built-in one that just orders by the numeric 
binary-representation of the characters in a string. Standard C style 
ordering.



We are using 8.2.4. There is a later version, but we tested our
applciation thoroughly with 8.2.4 and so we currently bundle our
applciation with this version only.


Try a later version. New features are not introduced in minor updates, 
but bug-fixes are. Check the release-notes for the latest version back 
to 8.2.4 and unless you are relying on the incorrect behaviour of a bug, 
it is worth keeping up-to-date.


http://www.postgresql.org/docs/8.2/static/release-8-2-15.html


Do you think uninstalling this locale would fix this issue?


No. Can you post the end of the installer log-file you get? Or 
preferably the whole file.


--
  Richard Huxton
  Archonet Ltd

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] need some advanced books on Postgres

2010-03-05 Thread Richard Huxton

On 05/03/10 01:27, Thomas wrote:

sigh,I didn't find a book with enough internal topics.


You're never going to see a book covering the PostgreSQL internals. 
You'd sell (at most) 100 copies and need to do major updates once a 
year. It'd be several months work to write and only a handful of people 
are really qualified to do so.


Like Adrian said - read the docs, and a polite question on the hackers 
list will always get a polite response (although it might not be instant 
- bear in mind people are in different timezones and they have a release 
to get out).


There's also the developer side of the website and wiki.
  http://www.postgresql.org/developer/
  http://developer.postgresql.org/index.php/Main_Page
  http://wiki.postgresql.org/wiki/Development_information

--
  Richard Huxton
  Archonet Ltd

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Restore Data Encountered the ERROR: literal carriage return found in data Error

2010-03-05 Thread Richard Huxton

On 04/03/10 23:52, Wang, Mary Y wrote:

Hi All,

After a pg_dumpall, I'm trying to restore with psql -e template1 -f
21.bak/tmp/out21.bak .  I'm trying to migrate to Postgres 8.3.8.


Are you running pg_dumpall from version 8.3.8?


I'm getting lots errors like this one: psql:21.bak:340557: ERROR:
literal carriage return found in data HINT:  Use \r to represent
carriage return.


I'd only expect this if an old pg_dumpall was being used.

After doing some research, one person posted here :

http://forums.devshed.com/postgresql-help-21/restoring-data-from-pg-7-1-3-7-4-release-99865.html
and this person said sed 's/^M/\\r/' alldump.sql  alldump2.sql
where ^M is entered by pressing Ctrl+V then Enter. 

I'm confused why Ctrl+V?  I thought that is a paste function key.


That'll be a terminal or shell escape sequence of some sort. It will 
convert the following keypress into a control-code that will get 
displayed as ^M (ctrl+M = ascii 13 = CR).


You could just do: sed 's/\r/\\r/' ... though

--
  Richard Huxton
  Archonet Ltd

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Foreign key behavior different in a function and outside

2010-03-05 Thread Richard Huxton

On 05/03/10 06:45, Mridula Mahadevan wrote:

Thanks for the response Tom. I am running postgres 8.3.7.

Yes, his is a highly simplified version, but I also didn't get the
column name right.



The core issue is the foreign key reference being deleted even though
there is no cascade delete defined. Thanks again.


Doesn't do it here. And I wouldn't expect it to.

This was using the precise text of your example (with the corrected 
function, which seems to contain an un-needed vSql variable, and with a 
RAISE NOTICE line to make sure the function was called.).


It's possible you've found a bug, but more likely that there's something 
unusual in your setup that you don't know about.


Two questions:

1. Are you running this actual test, on a newly created database?

2. Previously you said the following:


This even with no cascade delete, will succeed and child records are

 deleted from C. This does not happen on all my set ups but has been
 happening consistently on more than one.  Any known issue here?

Are you saying the test you sent us doesn't behave the same on different 
installations, or your actual application doesn't behave the same?


I'm guessing that you either:
1. Have a trigger you don't know about.
2. Have another function of the same name, but in a different schema 
that is being called by mistake. Add a RAISE NOTICE to the function to 
find out.


--
  Richard Huxton
  Archonet Ltd

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] The REAL cost of joins

2010-03-04 Thread Richard Huxton

On 03/03/10 21:59, Marcin Krol wrote:

What do you think of this? And in general: when (if?) should one
denormalize data?


As a last resort. No sooner.

The support costs of denormalising your database is such that if you can 
reasonably just buy more hardware / add caching / etc, do so.


Oh, and I'm afraid your tests are probably meaningless.
1. The times are too small to be accurate.
2. You have the overhead of starting psql and connecting to the 
database, starting a new backend etc.

3. You have the overhead of parsing the query
4. You're fetching all rows (which presumably aren't many) - not a 
terribly useful scenario.


If you wanted to measure actual join costs, you'd need to repeat the 
tests (say) 100-1000 times in a loop, optionally with prepared plans. 
Varying WHERE clauses might be useful too, if that's how your real 
application will work.


--
  Richard Huxton
  Archonet Ltd

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Scratching my head why results are different between machines.

2010-03-04 Thread Richard Huxton

On 04/03/10 01:35, Craig Ringer wrote:

Argh - a follow-up re ODBC 32 and 64 bit-ness:

http://blog.danovich.com.au/2010/02/02/odbc-settings-on-64-bit-servers/

It's way crazier than you'd ever expect. Both the 32-bit and 64-bit
versions are called odbcad32.exe but are in different (but not
distinctively named) locations.

http://support.microsoft.com/kb/942976/en-us


Classy. Even better - according to the linked page, the 64 bit version 
is in the System32 folder - yippee!


* The 32-bit version of the Odbcad32.exe file is located in the 
%systemdrive%\Windows\SysWoW64 folder.
* The 64-bit version of the Odbcad32.exe file is located in the 
%systemdrive%\Windows\System32 folder.




--
  Richard Huxton
  Archonet Ltd

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Failed to run initdb: 128

2010-03-04 Thread Richard Huxton

On 04/03/10 00:12, Niranjan Maturi (nmaturi) wrote:

Hi

I am trying to install Postgres 8.2 on a virtual machine that has
Windown 2003 SP2. The installation fails with the error Failed to run
initdb: 128. I searched the archives and looks like it is a catch-all
error. But is there anything specific that we can do to resolve this
issue?

I am logged in as an administrator, so no problem with permissions.
Postgres was able to create the windows user account properly and after
that it fails with the above error. It tells us to copy a log file
initdb.log, but that file does not exist.


It clearly didn't get that far in the process.


There is another log file pginstall.log, where I do see some error
messages, I printed them below. Any help is appreciated.


Well, searching on error 2769 just seems to say a custom installer 
step failed. In this case clearly GetAvailableLocales - nothing 
unusual with your locale setup, is there?



MSI (c) (C4:64) [15:59:30:242]: Note: 1: 2769 2: GetAvailableLocales 3:
1
MSI (c) (C4:64) [15:59:30:242]: Note: 1: 2205 2:  3: Error
MSI (c) (C4:64) [15:59:30:242]: Note: 1: 2228 2:  3: Error 4: SELECT
`Message` FROM `Error` WHERE `Error` = 2769
DEBUG: Error 2769:  Custom Action GetAvailableLocales did not close 1
MSIHANDLEs.
The installer has encountered an unexpected error installing this
package. This may indicate a problem with this package. The error code
is 2769. The arguments are: GetAvailableLocales, 1,


 {1F701DBD-1660-4108-B10A-FB435EA63BF0} Version: 8.2.0 Attributes: 0
 PatchId: Native BaselineId: -

This isn't version 8.2.0 you're installing, is it? Because 8.2.15 is the 
current release for that version.


--
  Richard Huxton
  Archonet Ltd

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] createdb but revoke dropdb

2010-03-03 Thread Richard Huxton

On 02/03/10 18:22, Ben Eliott wrote:

I have two roles, 'adminuser' with createdb permission, and 'dbuser' a
user with CRUD privileges.

adminuser is a member of the dbuser role, this seems to allow adminuser
to createdb databases for dbuser with:
createdb -U adminuser -O dbuser new_database_name
Adding .pgpass to the linux user's home directory allows createdb to
work without additional user input.

But now it seems the linux user also has dropdb privileges. How can i
restrict this?
Perhaps there is a recommended method to disable dropdb? Can anyone
suggest?


From the SQL reference page for GRANT
The right to drop an object, or to alter its definition in any way, is 
not treated as a grantable privilege; it is inherent in the owner, and 
cannot be granted or revoked. (However, a similar effect can be obtained 
by granting or revoking membership in the role that owns the object; see 
below.) The owner implicitly has all grant options for the object, too.


Don't make dbuser the owner of the database, make adminuser the 
owner, then grant whatever top-level privileges dbuser needs. Make sure 
you don't have adminuser as an automatic login through .pgpass
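
In outline, something like this (a sketch - the exact grants depend on what dbuser actually needs, and the schema grant has to be run while connected to the new database):

  CREATE DATABASE new_database_name OWNER adminuser;
  GRANT CONNECT, TEMP ON DATABASE new_database_name TO dbuser;
  GRANT USAGE, CREATE ON SCHEMA public TO dbuser;

dbuser can then create and use objects but cannot drop the database, because it doesn't own it.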



The adminuser has no login privileges so by removing dropdb this should
remove the possibility for any hacker chaos other than creating more
databases?


Or deleting/modifying all your data, presumably. If you don't trust the 
linux user account, don't give it automatic login.


--
  Richard Huxton
  Archonet Ltd

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] FSM and VM file

2010-03-03 Thread Richard Huxton

On 02/03/10 21:47, akp geek wrote:

I am doing a replication of one of the database and the size of the slave
database is growing exponentially . Right now the size of master db is 849M
and the slave is 7GB.



my master is 8.3 and slave is 8.4


I'm guessing your slave isn't being vacuumed. Or, perhaps you have a 
long-running transaction on the slave that is preventing dead rows from 
being cleaned up.


Two useful commands:

  vacuum verbose mytable;

This will show you how many rows/pages can/should be cleaned up.

  SELECT pg_size_pretty( pg_total_relation_size('mytable') );

This will show you the size of mytable (formatted nicely).

--
  Richard Huxton
  Archonet Ltd

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] FSM and VM file

2010-03-03 Thread Richard Huxton

On 03/03/10 13:32, akp geek wrote:

Thank you all for the suggestions. I did a vacuum and the size has gone down
drastically. But still it is not the same size as my master. I am looking
into it


Check your autovacuum settings and you should be able to keep things 
stable at least.


You might need to cluster tables / restart the replication to get the 
best case. Vacuuming needs to be a continual process.


--
  Richard Huxton
  Archonet Ltd

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] disable triggers isolated to transaction only?

2010-03-03 Thread Richard Huxton

On 03/03/10 15:46, Greg Sabino Mullane wrote:

ALTER TABLE will lock and block, but I'd be remiss if I didn't point
out the use of session_replication_role as a much better solution to
this particular class of problem. (Even if your version does not
support it, Vick, it should be noted here for the archives). The
session_replication_role was added in 8.3:

http://www.postgresql.org/docs/8.3/interactive/sql-altertable.html


That wouldn't have occurred to me. Definitely worth adding to the archives.
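
For the archives, then - the 8.3+ recipe is roughly this (it needs superuser, and remember to put it back):

  SET session_replication_role = replica;
  -- ordinary triggers (including FK enforcement triggers) do not fire here
  SET session_replication_role = DEFAULT;

Because foreign-key triggers are skipped too, only use it on data you already trust.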

--
  Richard Huxton
  Archonet Ltd

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Cast char to number

2010-02-24 Thread Richard Huxton

On 24/02/10 20:06, Raymond O'Donnell wrote:

However, to address your immediate problem, you could try something like
this:

(i)   Create a new column of type numeric or integer as appropriate.
(ii)  update your_table set new_column = CAST(trim(both ' 0' from
old_column) as numeric)
(iii) Drop the old column, as well as any constraints depending on it.


Or, in any recent version of PG you can do this via ALTER TABLE
  http://www.postgresql.org/docs/8.4/static/sql-altertable.html

ALTER TABLE t ALTER COLUMN c TYPE integer USING c::integer;

You might want to clean up the values before doing this.

--
  Richard Huxton
  Archonet Ltd

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Cast char to number

2010-02-24 Thread Richard Huxton

On 24/02/10 20:27, Joshua D. Drake wrote:

On Wed, 2010-02-24 at 20:22 +0000, Richard Huxton wrote:

On 24/02/10 20:06, Raymond O'Donnell wrote:

However, to address your immediate problem, you could try something like
this:

(i)   Create a new column of type numeric or integer as appropriate.
(ii)  update your_table set new_column = CAST(trim(both ' 0' from
old_column) as numeric)
(iii) Drop the old column, as well as any constraints depending on it.


Or, in any recent version of PG you can do this via ALTER TABLE
http://www.postgresql.org/docs/8.4/static/sql-altertable.html

ALTER TABLE t ALTER COLUMN c TYPE integer USING c::integer;

You might want to clean up the values before doing this.


That won't work in this case. char() can't be cast to int/numeric. Not
only that, it isn't possible to clean up the data in the table because char
automatically pads.

postgres=# alter table foo alter column id type numeric;
ERROR:  column id cannot be cast to type pg_catalog.numeric


Well, if it's actually char(10) or somesuch you need to do a little 
more, I grant you (though not much). I was assuming varchar myself.


richardh= CREATE TABLE intastext (i char(10));
CREATE TABLE
richardh= INSERT INTO intastext (i) VALUES ('1'), 
('02'),('3.0'),('3.5'),('X');

INSERT 0 5
richardh= SELECT * FROM intastext ;
     i      
------------
 1
 02
 3.0
 3.5
 X
(5 rows)

richardh= ALTER TABLE intastext ALTER COLUMN i TYPE integer USING 
i::text::numeric::integer;

ERROR:  invalid input syntax for type numeric: "X"
richardh= DELETE FROM intastext WHERE i = 'X';
DELETE 1
richardh= ALTER TABLE intastext ALTER COLUMN i TYPE integer USING 
i::text::numeric::integer;

ALTER TABLE
richardh= SELECT * FROM intastext ;
 i
---
 1
 2
 3
 4
(4 rows)

Of course USING can have any expression to convert the type.

richardh= CREATE FUNCTION my_map(char(10)) RETURNS integer AS $$ SELECT 
CASE WHEN $1 >= '0' AND $1 <= '9' THEN $1::numeric::integer ELSE -999 END; 
$$ LANGUAGE SQL;

CREATE FUNCTION
richardh= ALTER TABLE intastext ALTER COLUMN i TYPE integer USING 
my_map(i);
ALTER TABLE

richardh= SELECT * FROM intastext ;
  i
------
    1
    2
    3
    4
 -999
(5 rows)

--
  Richard Huxton
  Archonet Ltd

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Curious plperl behavior

2010-02-24 Thread Richard Huxton

On 24/02/10 20:55, Tom Lane wrote:

Jeff thres...@threshar.is-a-geek.com writes:

[ oracular excerpt from perlref ]


So is this just a dark corner of Perl, or is plperl doing something to
help you get confused?  In particular, do we need to add anything to
the plperl documentation?  We're not trying to explain Perl to people,
but if plperl is doing something that contributes to this, maybe it
requires documentation.


It is documented.

http://www.postgresql.org/docs/8.4/static/plperl-funcs.html

Note:  The use of named nested subroutines is dangerous in Perl, 
especially if they refer to lexical variables in the enclosing scope. 
Because a PL/Perl function is wrapped in a subroutine, any named 
subroutine you create will be nested. In general, it is far safer to 
create anonymous subroutines which you call via a coderef. See the 
perldiag man page for more details.



There's two ways to read that:
1. Dangerous in Perl - well, what isn't?
2. Dangerous in Perl - blimey, if they think it's dangerous, it must 
make lion-wrestling look safe.
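
Joking aside, a minimal sketch of the trap and of the safer coderef
style (made-up function names; plperl passes its arguments in @_):

  -- Risky: the named sub binds to one particular instance of $x,
  -- so calls after the first one can see a stale value.
  CREATE OR REPLACE FUNCTION nested_demo(integer) RETURNS integer AS $$
      my $x = $_[0];
      sub get_x { return $x; }
      return get_x();
  $$ LANGUAGE plperl;

  -- Safer: an anonymous sub is a real closure over the current $x.
  CREATE OR REPLACE FUNCTION coderef_demo(integer) RETURNS integer AS $$
      my $x = $_[0];
      my $get_x = sub { return $x; };
      return $get_x->();
  $$ LANGUAGE plperl;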


--
  Richard Huxton
  Archonet Ltd

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Curious plperl behavior

2010-02-24 Thread Richard Huxton

On 24/02/10 21:34, Tom Lane wrote:

Richard Huxton d...@archonet.com writes:

On 24/02/10 20:55, Tom Lane wrote:

but if plperl is doing something that contributes to this, maybe it
requires documentation.



It is documented.



http://www.postgresql.org/docs/8.4/static/plperl-funcs.html



Hmm.  Jeff found some relevant material on perlref.  Should that link be
added?  Should the link(s) be more specific than telling you to read the
whole d*mn man page?  Neither of those pages are short, and each contains
a wealth of material that isn't related to this issue.


Hmm - perhaps a suggestion to google for perl nested named subroutine. 
That seems to give a set of relevant results. Includes perldiag, 
perlref, our mailing lists and Apache's mod_perl (which makes sense).


--
  Richard Huxton
  Archonet Ltd

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Cast char to number

2010-02-24 Thread Richard Huxton

On 24/02/10 22:03, Bill Moran wrote:

Then, a year later
you find out that the serial number is really just a number, and you
actually want to be able to do math on it because you can find out
the year the part was designed by dividing by 1000 or something.

You make the best decisions you can based on the available information.
If you get it wrong, there's always ALTER TABLE :)


Coming in 9.1:

ALTER CUSTOMER ... SET REQUIREMENTS ...
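
Until then, once the column really is a number the maths Bill describes
is trivial (parts and serial_no being made-up names, of course):

  SELECT serial_no / 1000 AS design_year FROM parts;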

--
  Richard Huxton
  Archonet Ltd

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


  1   2   3   4   5   6   7   8   9   10   >