Re: [GENERAL] Segmentation fault during restoration of compressed(.gz) database

2009-09-17 Thread Richard Huxton
sulmansarwar wrote:
 Hi,
 
 I am new to PostgreSQL. I have been trying to restore a compressed(.gz)
 database using
 
 gunzip -c filename.gz | psql dbname
 
 After some 8 or 9 tables are restored the program exits with the error:
 Segmentation Fault.

 Exception Type:  EXC_BAD_ACCESS (SIGSEGV)
 Exception Codes: KERN_INVALID_ADDRESS at 0x014f
 Crashed Thread:  0
 
 Thread 0 Crashed:
 0   psql   0xd3b5      gets_fromFile + 296
 1   psql   0xdf73      MainLoop + 613
 2   psql   0x00010d7a  main + 2394
 3   psql   0x3fc2      _start + 216
 4   psql   0x3ee9      start + 41
 
 Thread 0 crashed with X86 Thread State (32-bit):
   eax: 0x014f  ebx: 0xd29c  ecx: 0x  edx: 0x0150
   edi: 0x00302866  esi: 0xb674  ebp: 0xb3b8  esp: 0xbfffef80
ss: 0x001f  efl: 0x00010216  eip: 0xd3b5   cs: 0x0017
ds: 0x001f   es: 0x001f   fs: 0x   gs: 0x0037
   cr2: 0x014f

If I'm reading that stack-trace correctly, it's crashed while reading
commands from STDIN (MainLoop -> gets_fromFile). It's unlikely that
something so basic has a bug, which suggests something else is
scribbling over a pointer that gets_fromFile is using.

People smarter than me will have to provide more help, but I can tell
you some of the details they'll want.

1. Exact version of PostgreSQL and how you installed it (package,
compiled from source etc). Try the output of pg_config too.

2. Anything unusual about the table it fails to restore (unusual data
types, very large data values, that sort of thing).

3. Is it reproducible from a decompressed version of filename?

4. Can you reduce it to a short example that can be tested on other
machines?

-- 
  Richard Huxton
  Archonet Ltd

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Segmentation fault during restoration of compressed(.gz) database

2009-09-17 Thread Richard Huxton
Ignore me - I missed the previous thread with the same question.

-- 
  Richard Huxton
  Archonet Ltd



Re: [GENERAL] Unicode normalization

2009-09-17 Thread Sam Mason
On Thu, Sep 17, 2009 at 12:01:57AM -0400, Alvaro Herrera wrote:
 http://wiki.postgresql.org/wiki/Strip_accents_from_strings

I'm still confused as to why plpython doesn't know the server's encoding
already; seems as though all text operations are predicated on knowing
this and hence all but the most trivial code has to go out of its way to
be correct with respect to this.

-- 
  Sam  http://samason.me.uk/



Re: [GENERAL] Unicode normalization

2009-09-17 Thread Andreas Kalsch
My standard encoding is UTF-8 on all levels so I don't need this 
high-cost call:


plpy.execute("select setting from pg_settings where name = 'server_encoding'")


Additionally I want to get the original cases.

For this purpose my solution still fits my needs. It is not the one you
cited below, but:


CREATE OR REPLACE FUNCTION simplify (str text)
RETURNS text
AS $$
import unicodedata

s = unicodedata.normalize('NFKD', str.decode('UTF-8'))
s = ''.join(c for c in s if unicodedata.combining(c) == 0)
return s.encode('UTF-8')
$$ LANGUAGE plpythonu;
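For anyone wanting to sanity-check that normalization step outside the database, here is a standalone Python 3 sketch of the same logic (no plpy; in Python 3 strings are already unicode, so the decode/encode pair drops out):

```python
import unicodedata

def simplify(s: str) -> str:
    # NFKD decomposes accented characters into base character + combining
    # marks; dropping the combining marks strips the accents while keeping
    # the original case.
    decomposed = unicodedata.normalize('NFKD', s)
    return ''.join(c for c in decomposed if not unicodedata.combining(c))
```

Note that characters with no decomposition (e.g. the German sharp s) pass through unchanged, which matches the plpythonu version's behaviour.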

Andi


2) Transfering this to PL/Python:

CREATE OR REPLACE FUNCTION test (str text)
 RETURNS text
AS $$
   import unicodedata
   return unicodedata.normalize('NFKD', str.decode('UTF-8'))
$$ LANGUAGE plpythonu;




Re[2]: [GENERAL] Problems with pg_dump for PG8.4 for WinXP (MinGW build)

2009-09-17 Thread el dorado

 2009/9/15 el dorado do_ra...@mail.ru:
  Hello!
  I need PG 8.4 built from source code for WinXP. So I got archive 
  postgresql-8.4.1.tar.gz, unpacked it and built postgres by MinGW.
  Everything seemed to be fine until we tried to test pg_dump. It failed (not 
  always but often).
  Command:
  pg_dump -U postgres -v -f mydump.sql TEST_DB
  And here we get a message 'an unhandled win32 exception occurred in 
  pg_dump.exe'
 
 Given that, it seems to be something with your mingw setup causing it.
 I think you'll have to get yourself a backtrace using gdb (the native
 debugger tools can usually not get usable backtraces from a mingw
 build).
  Magnus Hagander

Hello!
Thank you very much for your answer. I am trying to use gdb now, but there is
a little difficulty, and I'm not quite sure it is connected with the postgres
build. Probably I am doing something wrong with gdb, but maybe you could point
me in the right direction?
I built PostgreSQL 8.4 (WinXP, MinGW) with following configuration options:
configure --without-zlib --enable-debug --enable-cassert --enable-depend
So I supposed the result should include debugging symbols.
But I don't see the function names in the backtrace. It looks like:
(gdb) bt
#0 0x7c8106e9 in ?? ()
#1 0x in ?? ()
(gdb)
Maybe you could advise me how to get more detailed information?
I'll appreciate any help.

Regards, Marina.



[GENERAL] What kind of JOIN, if any?

2009-09-17 Thread Paul M Foster
Folks:

I can't find a way to do this purely with SQL. Any help would be
appreciated.

Table 1: urls

id | url
---+--------
1  | alfa
2  | bravo
3  | charlie
4  | delta

Table 2: access

userid | url_id
-------+-------
paulf  | 1
paulf  | 2
nancyf | 2
nancyf | 3

The access table is related to the url table via url_id = id.

Here's what I want as a result of a query: I want all the records of the
url table, one row for each record, plus the userid field that goes with
it, for a specified user (paulf), with NULLs as needed, like this:

userid | url
-------+--------
paulf  | alfa
paulf  | bravo
   | charlie
   | delta

I can do *part* of this with various JOINs, but the moment I specify
userid = 'paulf', I don't get the rows with NULLs.

Again, any help would be appreciated.

Paul

-- 
Paul M. Foster



Re: [GENERAL] What kind of JOIN, if any?

2009-09-17 Thread Serge Fonville
Hi,

I'd look into outer joins

http://www.postgresql.org/docs/8.1/static/tutorial-join.html


 I can do *part* of this with various JOINs, but the moment I specify
 userid = 'paulf', I don't get the rows with NULLs


If you want all fields from one table and only those matching from another
use outer join

HTH

Regards,

Serge Fonville

On Thu, Sep 17, 2009 at 4:29 PM, Paul M Foster pa...@quillandmouse.com wrote:

 Folks:

 I can't find a way to do this purely with SQL. Any help would be
 appreciated.

 Table 1: urls

 id | url
 ---+--------
 1  | alfa
 2  | bravo
 3  | charlie
 4  | delta

 Table 2: access

 userid | url_id
 -------+-------
 paulf  | 1
 paulf  | 2
 nancyf | 2
 nancyf | 3

 The access table is related to the url table via url_id = id.

 Here's what I want as a result of a query: I want all the records of the
 url table, one row for each record, plus the userid field that goes with
 it, for a specified user (paulf), with NULLs as needed, like this:

 userid | url
 -------+--------
 paulf  | alfa
 paulf  | bravo
   | charlie
   | delta

 I can do *part* of this with various JOINs, but the moment I specify
 userid = 'paulf', I don't get the rows with NULLs.

 Again, any help would be appreciated.

 Paul

 --
 Paul M. Foster




Re: [GENERAL] What kind of JOIN, if any?

2009-09-17 Thread Sam Mason
On Thu, Sep 17, 2009 at 10:29:11AM -0400, Paul M Foster wrote:
 I want all the records of the
 url table, one row for each record, plus the userid field that goes with
 it, for a specified user (paulf), with NULLs as needed

Maybe something like this?

  SELECT a.userid, u.url
  FROM urls u
LEFT JOIN access a ON u.id = a.url_id AND a.userid = 'paulf';

 I can do *part* of this with various JOINs, but the moment I specify
 userid = 'paulf', I don't get the rows with NULLs.

I guess you were putting userid = 'paulf' into the WHERE clause,
that's the wrong place.  It needs to be up in the ON clause.
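The difference between the two placements can be checked with any SQL engine; here is a minimal sketch using Python's sqlite3 with the thread's sample data (sqlite serves purely as a convenient stand-in for PostgreSQL):

```python
import sqlite3

# In-memory database with the tables and rows from the thread.
conn = sqlite3.connect(':memory:')
conn.executescript("""
    CREATE TABLE urls (id INTEGER, url TEXT);
    INSERT INTO urls VALUES (1,'alfa'),(2,'bravo'),(3,'charlie'),(4,'delta');
    CREATE TABLE access (userid TEXT, url_id INTEGER);
    INSERT INTO access VALUES ('paulf',1),('paulf',2),('nancyf',2),('nancyf',3);
""")

# Filter inside ON: every urls row survives; userid is NULL where no match.
on_rows = conn.execute("""
    SELECT a.userid, u.url FROM urls u
    LEFT JOIN access a ON u.id = a.url_id AND a.userid = 'paulf'
    ORDER BY u.id
""").fetchall()
# -> [('paulf','alfa'), ('paulf','bravo'), (None,'charlie'), (None,'delta')]

# Filter in WHERE: the NULL rows fail userid = 'paulf' and are thrown away.
where_rows = conn.execute("""
    SELECT a.userid, u.url FROM urls u
    LEFT JOIN access a ON u.id = a.url_id
    WHERE a.userid = 'paulf'
    ORDER BY u.id
""").fetchall()
# -> [('paulf','alfa'), ('paulf','bravo')]
```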

-- 
  Sam  http://samason.me.uk/



Re: [GENERAL] What kind of JOIN, if any?

2009-09-17 Thread Mark Styles
On Thu, Sep 17, 2009 at 10:29:11AM -0400, Paul M Foster wrote:
 I can't find a way to do this purely with SQL. Any help would be
 appreciated.
 
 Table 1: urls
 
 id | url
 ---+--------
 1  | alfa
 2  | bravo
 3  | charlie
 4  | delta
 
 Table 2: access
 
 userid | url_id
 -------+-------
 paulf  | 1
 paulf  | 2
 nancyf | 2
 nancyf | 3
 
 The access table is related to the url table via url_id = id.
 
 Here's what I want as a result of a query: I want all the records of the
 url table, one row for each record, plus the userid field that goes with
 it, for a specified user (paulf), with NULLs as needed, like this:
 
 userid | url
 -------+--------
 paulf  | alfa
 paulf  | bravo
| charlie
| delta
 
 I can do *part* of this with various JOINs, but the moment I specify
 userid = 'paulf', I don't get the rows with NULLs.

SELECT userid, url
FROM   urls
LEFT OUTER JOIN (select * from access where userid = 'paulf') AS access
ON access.url_id = urls.id;

-- 
Mark 
http://www.lambic.co.uk





Re: [GENERAL] What kind of JOIN, if any?

2009-09-17 Thread Paul M Foster
On Thu, Sep 17, 2009 at 04:20:57PM +0100, Sam Mason wrote:

 On Thu, Sep 17, 2009 at 10:29:11AM -0400, Paul M Foster wrote:
  I want all the records of the
  url table, one row for each record, plus the userid field that goes with
  it, for a specified user (paulf), with NULLs as needed
 
 Maybe something like this?
 
   SELECT a.userid, u.url
   FROM urls u
 LEFT JOIN access a ON u.id = a.url_id AND a.userid = 'paulf';
 
  I can do *part* of this with various JOINs, but the moment I specify
  userid = 'paulf', I don't get the rows with NULLs.
 
 I guess you were putting userid = 'paulf' into the WHERE clause,
 that's the wrong place.  It needs to be up in the ON clause.

You da man. That is the answer; it worked. Thanks very much.

(BTW, on your website, the link from Simple Report Generator to
http://samason.me.uk/~sam/reportgen/ is broken.)

Paul

-- 
Paul M. Foster



[GENERAL] Index Usage in View with Aggregates

2009-09-17 Thread Ian Harding
I have never had this particular problem in PostgreSQL, it seems to
just know when queries can be flattened and indexes used.  I know
that takes tons of work.  Thank you for that.

Here's the Oracle question.

http://stackoverflow.com/questions/1439500/oracle-index-usage-in-view-with-aggregates

I suspect I have made some subtle mistake in changing that SQL into a
view that is breaking Oracle's ability to know that an index can be
used.  Any and all advice is appreciated.

- Ian



Re: [GENERAL] Index Usage in View with Aggregates

2009-09-17 Thread Scott Marlowe
On Thu, Sep 17, 2009 at 9:55 AM, Ian Harding harding@gmail.com wrote:
 I have never had this particular problem in PostgreSQL, it seems to
 just know when queries can be flattened and indexes used.  I know
 that takes tons of work.  Thank you for that.

 Here's the Oracle question.

 http://stackoverflow.com/questions/1439500/oracle-index-usage-in-view-with-aggregates

 I suspect I have made some subtle mistake in changing that SQL into a
 view that is breaking Oracle's ability to know that an index can be
 used.  Any and all advice is appreciated.

You're far more likely to get a good answer on an Oracle forum.



Re: [GENERAL] What kind of JOIN, if any?

2009-09-17 Thread Paul M Foster
On Thu, Sep 17, 2009 at 11:23:12AM -0400, Mark Styles wrote:

 On Thu, Sep 17, 2009 at 10:29:11AM -0400, Paul M Foster wrote:
  I can't find a way to do this purely with SQL. Any help would be
  appreciated.
  
  Table 1: urls
  
  id | url
  ---+--------
  1  | alfa
  2  | bravo
  3  | charlie
  4  | delta
  
  Table 2: access
  
  userid | url_id
  -------+-------
  paulf  | 1
  paulf  | 2
  nancyf | 2
  nancyf | 3
  
  The access table is related to the url table via url_id = id.
  
  Here's what I want as a result of a query: I want all the records of the
  url table, one row for each record, plus the userid field that goes with
  it, for a specified user (paulf), with NULLs as needed, like this:
  
  userid | url
  -------+--------
  paulf  | alfa
  paulf  | bravo
 | charlie
 | delta
  
  I can do *part* of this with various JOINs, but the moment I specify
  userid = 'paulf', I don't get the rows with NULLs.
 
 SELECT userid, url
 FROM   urls
 LEFT OUTER JOIN (select * from access where userid = 'paulf') AS access
 ON access.url_id = urls.id;

Another good suggestion. Thanks.

Paul

-- 
Paul M. Foster



[GENERAL] COPY binary

2009-09-17 Thread Nathaniel
Hello,

When using PQputCopyData and PQgetCopyData to send and receive binary data from 
postgres, would you include/expect headers and trailers (as well as the tuples 
themselves) as you would in a binary file named 'file_name' if you were 
executing the SQL COPY BINARY table_name FROM/TO 'file_name'?

Nathaniel







[GENERAL] Multiple counts on criteria - Approach to a problem

2009-09-17 Thread Neil Saunders
Hi all,

I maintain an online property rental application. The main focus of the UI
is the search engine, which I'd now like to improve by allowing filtering of
the search results shown on some criteria, but provide a count of the number
of properties that meet that criteria.

For example, we're looking at all properties, no criteria. I'd like to show
something like:

Bedrooms:
1 Bedroom (122)
2 Bedrooms (143)
3 Bedrooms (88)

Facilities
BBQ (232)
Pool (122)

...and so on. My question is simple - What's the best way to implement this
- Do I literally have to execute a count for the WHERE criteria with the
filter criteria tagged on, or is there some clever trick that I'm not aware
of? I'd rather not count in the application as I'd like to plan for the day
we have up to 100k properties (

Any suggestions gratefully received!

Ta,

Neil.


Re: [GENERAL] COPY binary

2009-09-17 Thread Tom Lane
Nathaniel napt...@yahoo.co.uk writes:
 When using PQputCopyData and PQgetCopyData to send and receive binary data 
 from postgres, would you include/expect headers and trailers (as well as the 
 tuples themselves) as you would in a binary file named 'file_name' if you 
 were executing the SQL COPY BINARY table_name FROM/TO 'file_name'?

Yes.

regards, tom lane
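For reference, the headers and trailers being confirmed here have a small fixed shape. The sketch below follows my reading of the documented COPY BINARY file format (11-byte signature, 32-bit flags, 32-bit header-extension length, then tuples, then a 16-bit -1 trailer); verify the details against your server version's docs before relying on it:

```python
import struct

# A binary COPY stream starts with this 11-byte signature, then a 32-bit
# flags field and a 32-bit header-extension length; it ends with a 16-bit
# field-count word of -1 acting as the file trailer.
SIGNATURE = b'PGCOPY\n\xff\r\n\x00'

def make_header(flags: int = 0, extension: bytes = b'') -> bytes:
    """Header that precedes the first tuple sent via PQputCopyData."""
    return SIGNATURE + struct.pack('!Ii', flags, len(extension)) + extension

def make_trailer() -> bytes:
    """Trailer expected after the last tuple."""
    return struct.pack('!h', -1)
```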



Re: [GENERAL] Multiple counts on criteria - Approach to a problem

2009-09-17 Thread Ben Chobot

Neil Saunders wrote:

Hi all,

I maintain an online property rental application. The main focus of 
the UI is the search engine, which I'd now like to improve by allowing 
filtering of the search results shown on some criteria, but provide a 
count of the number of properties that meet that criteria.


For example, we're looking all properties, no criteria. I'd like to 
show something like:


Bedrooms:
1 Bedroom (122)
2 Bedrooms (143)
3 Bedrooms (88)

Facilities
BBQ (232)
Pool (122)

...and so on. My question is simple - What's the best way to implement 
this - Do I literally have to execute a count for the WHERE criteria 
with the filter criteria tagged on, or is there some clever trick that 
I'm not aware of? I'd rather not count in the application as I'd like 
to plan for the day we have up to 100k properties (


You could try using materialized views. Your churn is probably low 
enough that it would make sense to update the views every time a 
property is (de)listed.




Re: [GENERAL] Multiple counts on criteria - Approach to a problem

2009-09-17 Thread Adam Rich

Neil Saunders wrote:

Hi all,

I maintain an online property rental application. The main focus of the 
UI is the search engine, which I'd now like to improve by allowing 
filtering of the search results shown on some criteria, but provide a 
count of the number of properties that meet that criteria.



 (snip)


...and so on. My question is simple - What's the best way to implement 
this - Do I literally have to execute a count for the WHERE criteria 
with the filter criteria tagged on, or is there some clever trick that 
I'm not aware of? I'd rather not count in the application as I'd like to 
plan for the day we have up to 100k properties (


Any suggestions gratefully received!



Here's the structure you want:

select
sum(case bedrooms when 1 then 1 else 0 end) as br1,
sum(case bedrooms when 2 then 1 else 0 end) as br2,
sum(case bedrooms when 3 then 1 else 0 end) as br3,
sum(case has_bbq when 1 then 1 else 0 end) as bbq,
sum(case has_pool when 1 then 1 else 0 end) as pool
from properties

in other words, you can put the criteria inside a case statement that 
returns a 0 or 1, and use sum() over that case to count the rows that 
returned a 1.
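A quick way to convince yourself of the trick, with sqlite3 used purely as a stand-in for PostgreSQL and made-up sample rows (the boolean facilities are stored as 0/1 integers, matching the CASE expressions above):

```python
import sqlite3

# Tiny sample table: (bedrooms, has_bbq, has_pool) per property.
conn = sqlite3.connect(':memory:')
conn.executescript("""
    CREATE TABLE properties (bedrooms INTEGER, has_bbq INTEGER, has_pool INTEGER);
    INSERT INTO properties VALUES (1,1,0),(2,0,1),(2,1,1),(3,1,0);
""")

# One scan over the table produces every count at once.
counts = conn.execute("""
    SELECT sum(CASE bedrooms WHEN 1 THEN 1 ELSE 0 END) AS br1,
           sum(CASE bedrooms WHEN 2 THEN 1 ELSE 0 END) AS br2,
           sum(CASE bedrooms WHEN 3 THEN 1 ELSE 0 END) AS br3,
           sum(CASE has_bbq  WHEN 1 THEN 1 ELSE 0 END) AS bbq,
           sum(CASE has_pool WHEN 1 THEN 1 ELSE 0 END) AS pool
    FROM properties
""").fetchone()
# -> (1, 2, 1, 3, 2)
```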


Adam









[GENERAL] creation of foreign key without checking prior data?

2009-09-17 Thread hubert depesz lubaczewski
Hi,
would it be possible to add a way to create foreign key without checking
of prior data?

Before you say it's a bad idea, because then you might get invalid
data - I know. You can get invalid data in a column that is checked by a
foreign key even now - by temporarily disabling triggers and/or writing
special (or bad) triggers.

So, since (as we know) foreign keys are not fault-proof, wouldn't it be
good to provide a way to create them without all this time-consuming
check? It would come in handy, for example, when loading dumps (disabling
the fkey for the load is a bad option, because you first need to create it
before you can disable it, and to create it you need a unique constraint on
the referenced table, which requires an index - and I don't want the index
to be created before the data is loaded, because it's just too slow).

depesz

-- 
Linkedin: http://www.linkedin.com/in/depesz  /  blog: http://www.depesz.com/
jid/gtalk: dep...@depesz.com / aim:depeszhdl / skype:depesz_hdl / gg:6749007



Re: [GENERAL] Multiple counts on criteria - Approach to a problem

2009-09-17 Thread Bill Moran
In response to Neil Saunders n.j.saund...@gmail.com:

 Hi all,
 
 I maintain an online property rental application. The main focus of the UI
 is the search engine, which I'd now like to improve by allowing filtering of
 the search results shown on some criteria, but provide a count of the number
 of properties that meet that criteria.
 
 For example, we're looking all properties, no criteria. I'd like to show
 something like:
 
 Bedrooms:
 1 Bedroom (122)
 2 Bedrooms (143)
 3 Bedrooms (88)
 
 Facilities
 BBQ (232)
 Pool (122)
 
 ...and so on. My question is simple - What's the best way to implement this
 - Do I literally have to execute a count for the WHERE criteria with the
 filter criteria tagged on, or is there some clever trick that I'm not aware
 of? I'd rather not count in the application as I'd like to plan for the day
 we have up to 100k properties (
 
 Any suggestions gratefully received!

In addition to the other excellent suggestions, you can provide estimates
at a very small cost by using explain:

EXPLAIN SELECT count(*) FROM properties WHERE bedrooms=3;

Then parse the explain output to get the estimated # of rows.  Very low
overhead, but of course it's only an estimate.
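Pulling the estimate out of the plan text could look like the sketch below (`estimated_rows` and the sample plan line are hypothetical; real EXPLAIN output varies with version and plan shape):

```python
import re

def estimated_rows(explain_line: str) -> int:
    """Pull the planner's estimated row count from one line of EXPLAIN output."""
    m = re.search(r'rows=(\d+)', explain_line)
    if m is None:
        raise ValueError('no row estimate in line')
    return int(m.group(1))

# A plan line of the shape EXPLAIN prints (the costs here are invented):
scan_line = 'Seq Scan on properties  (cost=0.00..22.50 rows=413 width=0)'
```

In practice you would grab the estimate from the scan node for the filtered table, not from the top-level Aggregate node, whose estimate is always 1 row.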

-- 
Bill Moran
http://www.potentialtech.com
http://people.collaborativefusion.com/~wmoran/



[GENERAL] 'Weird' errors

2009-09-17 Thread Martin Gainty

Experiencing weird 14001 errors - no logs, no events, nothing to go by - so I 
completely rebuilt the cygwin postgres.

ran cygwin
then i ran the initdb as follows

$ ./postgresql-8.4.1/src/bin/initdb/initdb.exe -D /pgsql/data
The files belonging to this database system will be owned by user postgres.
This user must also own the server process.

The database cluster will be initialized with locale C.
The default database encoding has accordingly been set to SQL_ASCII.
The default text search configuration will be set to english.

creating directory /pgsql/data ... ok
creating subdirectories ... ok
selecting default max_connections ... sh: line 1:  2752 Bad system call
/cygdrive/c/Postgres/postgresql-8.4.1/src/bin/initdb/postgres.exe --boot -x0 
-F -c max_connections=100 -c shared_buffers=1000 < /dev/null > /dev/null 2>&1

<snip>these Bad System calls go on for pages ...</snip>

400kB
creating configuration files ... ok
creating template1 database in /pgsql/data/base/1 ... child process was 
terminated by signal 12
initdb: removing data directory /pgsql/data

#so lets try just one system call at command line and determine where 
the error is
postg...@desktop 
/cygdrive/c/Postgres$ 
/cygdrive/c/Postgres/postgresql-8.4.1/src/bin/initdb/postgres.exe --boot -x0 -F 
-c max_connections=100 -c shared_buffers=1000 < /dev/null > /dev/null 2>&1

no error

???why does initdb choke on these system calls but cygwin command line 
works fine?

Postgres version: postgresql-8.4.1
$ cygcheck -V
cygcheck version 1.90.4.1
System Checker for Cygwin
Copyright 1998, 1999, 2000, 2001, 2002, 2003, 2004, 2005, 2006 Red Hat, Inc.
Compiled on Jun 12 2008

apologies in advance for posting to wrong list
Martin Gainty 




Re: [GENERAL] creation of foreign key without checking prior data?

2009-09-17 Thread Peter Hunsberger
On Thu, Sep 17, 2009 at 11:40 AM, hubert depesz lubaczewski
dep...@depesz.com wrote:

 So, since (as we know) foreign keys are not fault-proof, wouldn't it be
 good to provide a way to create them without all this time-consuming
 check?

No.

If you don't want the behavior of a foreign key then just don't define
a foreign key. Load the data, clean it up, then create the foreign key.

-- 
Peter Hunsberger



Re: [GENERAL] creation of foreign key without checking prior data?

2009-09-17 Thread hubert depesz lubaczewski
On Thu, Sep 17, 2009 at 12:31:14PM -0500, Peter Hunsberger wrote:
 On Thu, Sep 17, 2009 at 11:40 AM, hubert depesz lubaczewski
 dep...@depesz.com wrote:
 
  So, since (as we know) foreign keys are not fault-proof, wouldn't it be
  good to provide a way to create them without all this time-consuming
  check?
 
 No.
 
 If you don't want the behavior of a foreign key then just don't define
 a foreign key. Load the data, clean it up, then create the foreign key

I think you are missing the point. The data is clean - it's a dump. Creation
of the fkey takes time, and I'd like to avoid this delay.

depesz

-- 
Linkedin: http://www.linkedin.com/in/depesz  /  blog: http://www.depesz.com/
jid/gtalk: dep...@depesz.com / aim:depeszhdl / skype:depesz_hdl / gg:6749007



[GENERAL] pg_restore -j

2009-09-17 Thread Scott Marlowe
I'm trying to do a parallel restore with pg_restore -j but I'm only
seeing one CPU being used really.   The file is custom format, but was
made by pg_dump for pgsql 8.3.  Is that a problem? Do I need a backup
made with 8.4 to run parallel restore?



Re: [GENERAL] pg_restore -j

2009-09-17 Thread Joshua D. Drake
On Thu, 2009-09-17 at 11:48 -0600, Scott Marlowe wrote:
 I'm trying to do a parallel restore with pg_restore -j but I'm only
 seeing one CPU being used really.   The file is custom format, but was
 made by pg_dump for pgsql 8.3.  Is that a problem? Do I need a backup
 made with 8.4 to run parallel restore?

Yes, I believe so, but I don't recall for certain. You could dump the TOC
and note the differences.

 
-- 
PostgreSQL.org Major Contributor
Command Prompt, Inc: http://www.commandprompt.com/ - 503.667.4564
Consulting, Training, Support, Custom Development, Engineering
If the world pushes look it in the eye and GRR. Then push back harder. - 
Salamander




Re: [GENERAL] pg_restore -j

2009-09-17 Thread Scott Marlowe
On Thu, Sep 17, 2009 at 12:00 PM, Joshua D. Drake j...@commandprompt.com 
wrote:
 On Thu, 2009-09-17 at 11:48 -0600, Scott Marlowe wrote:
 I'm trying to do a parallel restore with pg_restore -j but I'm only
 seeing one CPU being used really.   The file is custom format, but was
 made by pg_dump for pgsql 8.3.  Is that a problem? Do I need a backup
 made with 8.4 to run parallel restore?

 Yes I believe but I don't recall. You could dump the TOC and note
 differences.

I kinda figured, I'm making a dump with pg84 now to test with.  I'm
really hoping for a noticeable improvement in restore times, as we're
in the 1.5 to 2 hour range right now.



Re: [GENERAL] pg_restore -j

2009-09-17 Thread Joshua D. Drake
On Thu, 2009-09-17 at 12:05 -0600, Scott Marlowe wrote:
 On Thu, Sep 17, 2009 at 12:00 PM, Joshua D. Drake j...@commandprompt.com 
 wrote:
  On Thu, 2009-09-17 at 11:48 -0600, Scott Marlowe wrote:
  I'm trying to do a parallel restore with pg_restore -j but I'm only
  seeing one CPU being used really.   The file is custom format, but was
  made by pg_dump for pgsql 8.3.  Is that a problem? Do I need a backup
  made with 8.4 to run parallel restore?
 
  Yes I believe but I don't recall. You could dump the TOC and note
  differences.
 
 I kinda figured, I'm making a dump with pg84 now to test with.  I'm
 really hoping for a noticeable improvement in restore times, as we're
 in the 1.5 to 2 hour range right now.
 

If you have the concurrency and disk IO, you should get that down below
30 minutes.

Joshua D. Drake


-- 
PostgreSQL.org Major Contributor
Command Prompt, Inc: http://www.commandprompt.com/ - 503.667.4564
Consulting, Training, Support, Custom Development, Engineering
If the world pushes look it in the eye and GRR. Then push back harder. - 
Salamander




Re: [GENERAL] pg_restore -j

2009-09-17 Thread Scott Marlowe
On Thu, Sep 17, 2009 at 12:12 PM, Joshua D. Drake j...@commandprompt.com 
wrote:
 On Thu, 2009-09-17 at 12:05 -0600, Scott Marlowe wrote:
 On Thu, Sep 17, 2009 at 12:00 PM, Joshua D. Drake j...@commandprompt.com 
 wrote:
  On Thu, 2009-09-17 at 11:48 -0600, Scott Marlowe wrote:
  I'm trying to do a parallel restore with pg_restore -j but I'm only
  seeing one CPU being used really.   The file is custom format, but was
  made by pg_dump for pgsql 8.3.  Is that a problem? Do I need a backup
  made with 8.4 to run parallel restore?
 
  Yes I believe but I don't recall. You could dump the TOC and note
  differences.

 I kinda figured, I'm making a dump with pg84 now to test with.  I'm
 really hoping for a noticeable improvement in restore times, as we're
 in the 1.5 to 2 hour range right now.


 If you have the concurrency and disk IO, you should get that down below
 30 minutes.

On our two big servers we have 12 Disk RAID-10 for pgdata, and 2 disk
RAID-1 for pg_xlog, and 8 cores.  What's a good -j number to start at
there?  I'm leaning towards 8 or 10 or 12 for testing.  Woohoo late
night testing.  :)



Re: [GENERAL] creation of foreign key without checking prior data?

2009-09-17 Thread Peter Hunsberger
On Thu, Sep 17, 2009 at 12:44 PM, hubert depesz lubaczewski
dep...@depesz.com wrote:
 On Thu, Sep 17, 2009 at 12:31:14PM -0500, Peter Hunsberger wrote:
 On Thu, Sep 17, 2009 at 11:40 AM, hubert depesz lubaczewski
 dep...@depesz.com wrote:
 
  So, since (as we know) foreign keys are not fault-proof, wouldn't it be
  good to provide a way to create them without all this time-consuming
  check?

 No.

 If you don't want the behavior of a foreign key then just don't define
 a foreign key. Load the data, clean it up, then create the foreign key

  I think you are missing the point. The data is clean - it's a dump. Creation
  of the fkey takes time, and I'd like to avoid this delay.

You can't have a foreign key that doesn't have relational integrity;
it is no longer a foreign key.  If you don't want the delay then don't
define the key, at least until some point at which you can take the
delay.  If there is never such a time then your operational scenario
needs changing, not Postgres...

-- 
Peter Hunsberger



Re: [GENERAL] pg_restore -j

2009-09-17 Thread Joshua D. Drake
On Thu, 2009-09-17 at 12:15 -0600, Scott Marlowe wrote:
 On Thu, Sep 17, 2009 at 12:12 PM, Joshua D. Drake j...@commandprompt.com 
 wrote:
  On Thu, 2009-09-17 at 12:05 -0600, Scott Marlowe wrote:
  On Thu, Sep 17, 2009 at 12:00 PM, Joshua D. Drake j...@commandprompt.com 
  wrote:
   On Thu, 2009-09-17 at 11:48 -0600, Scott Marlowe wrote:
   I'm trying to do a parallel restore with pg_restore -j but I'm only
   seeing one CPU being used really.   The file is custom format, but was
   made by pg_dump for pgsql 8.3.  Is that a problem? Do I need a backup
   made with 8.4 to run parallel restore?
  
   Yes I believe but I don't recall. You could dump the TOC and note
   differences.
 
  I kinda figured, I'm making a dump with pg84 now to test with.  I'm
  really hoping for a noticeable improvement in restore times, as we're
  in the 1.5 to 2 hour range right now.
 
 
  If you have the concurrency and disk IO, you should get that down below
  30 minutes.
 
 On our two big servers we have 12 Disk RAID-10 for pgdata, and 2 disk
 RAID-1 for pg_xlog, and 8 cores.  What's a good -j number to start at
 there?  I'm leaning towards 8 or 10 or 12 for testing.  Woohoo late
 night testing.  :)

I found 1.5*num_cpus to be most beneficial but it obviously depends on
data set etc..
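A sketch of the workflow under discussion (paths and database name are hypothetical; assumes the 8.4 client tools): dump in custom format, which parallel restore requires, then restore with several jobs.

```shell
# Dump in custom format (-Fc) with the 8.4 pg_dump:
#   pg_dump -Fc -f /backups/mydb.dump mydb
# Restore with parallel jobs:
#   pg_restore -j 12 -d mydb /backups/mydb.dump

# Starting point for -j per the 1.5 * CPUs rule of thumb above:
cores=8
jobs=$(( cores * 3 / 2 ))     # integer form of 1.5 * cores
echo "suggested -j: $jobs"    # prints: suggested -j: 12
```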

Joshua D. Drake


 
-- 
PostgreSQL.org Major Contributor
Command Prompt, Inc: http://www.commandprompt.com/ - 503.667.4564
Consulting, Training, Support, Custom Development, Engineering
If the world pushes look it in the eye and GRR. Then push back harder. - 
Salamander




[GENERAL] Row estimates on empty table

2009-09-17 Thread Alban Hertroys

Hello all,

I'm seeing something strange with the row-estimates on an empty table.  
The table in question is merely a template table that specialised  
tables inherit from; it will never contain any data. Nevertheless,  
after importing my creation script and running VACUUM ANALYSE, the  
result I see is this:


dalroi=# SELECT * FROM ONLY unit;
 unit | format | scales_up | scales_down
--++---+-
(0 rows)

dalroi=# EXPLAIN ANALYZE SELECT * FROM ONLY unit;
QUERY PLAN
--
 Seq Scan on unit  (cost=0.00..18.50 rows=850 width=66) (actual  
time=0.001..0.001 rows=0 loops=1)

 Total runtime: 0.025 ms
(2 rows)

As you see, estimated rows 850, actual rows 0!

Now 25 µs doesn't sound like much, but this data is going to be joined  
to another small table and it's throwing the estimated number of rows  
WAY off. See here: http://explain-analyze.info/query_plans/3956-Alban-s-unit-normalization-query-1


(Yes, 4 ms still isn't bad, but these queries are likely going to be  
at the basis of many other queries so they need to be snap-snap-snap!  
The more joins the worse the estimate will get, right?)


So what's going on here?

For the record, this is PG 8.4 compiled from macports on Snow Leopard.  
I've seen a few odd reports with that combination so I thought I'd  
mention it. To be exact: PostgreSQL 8.4.0 on i386-apple-darwin10.0.0,  
compiled by GCC i686-apple-darwin10-gcc-4.2.1 (GCC) 4.2.1 (Apple Inc.  
build 5646), 64-bit


Alban Hertroys

--
Screwing up is the best way to attach something to the ceiling.




Re: [GENERAL] creation of foreign key without checking prior data?

2009-09-17 Thread hubert depesz lubaczewski
On Thu, Sep 17, 2009 at 01:22:52PM -0500, Peter Hunsberger wrote:
 You can't have a foreign key that doesn't have relational integrity,
 it is no longer a foreign key.  If you don't want the delay then don't
 define the key, at least until some point at which you can take the
 delay.  If there is never such a time then your operational scenario
 needs changing, not Postgres...

you do realize that having foreign key defined doesn't guarantee
integrity?

depesz

-- 
Linkedin: http://www.linkedin.com/in/depesz  /  blog: http://www.depesz.com/
jid/gtalk: dep...@depesz.com / aim:depeszhdl / skype:depesz_hdl / gg:6749007



Re: [GENERAL] 'Weird' errors

2009-09-17 Thread Richard Huxton
Martin Gainty wrote:
 experiencing weird 14001 errors. No logs, no event viewer, nothing to go by,
 so I completely rebuilt the cygwin postgres
 
 ran cygwin
 then i ran the initdb as follows
 
 $ ./postgresql-8.4.1/src/bin/initdb/initdb.exe -D /pgsql/data
 The files belonging to this database system will be owned by user postgres.
 This user must also own the server process.
 
 The database cluster will be initialized with locale C.
 The default database encoding has accordingly been set to SQL_ASCII.
 The default text search configuration will be set to english.
 
 creating directory /pgsql/data ... ok
 creating subdirectories ... ok
 selecting default max_connections ... sh: line 1:  2752 Bad system call
 /cygdrive/c/Postgres/postgresql-8.4.1/src/bin/initdb/postgres.exe --boot 
 -x0 -
 F -c max_connections=100 -c shared_buffers=1000 < /dev/null > /dev/null 2>&1

Googling "cygwin bad system call" gets me this:
  http://people.maths.ox.ac.uk/~nichol/cruftscraper/cygwinenv.shtml

The  CYGWIN environment variable
...
(no)server - if set, allows client applications to use the Cygserver
facilities. This option must be enabled explicitely on the client side,
otherwise your applications won't be able to use the XSI IPC function
calls (msgget, semget, shmget, and friends) successfully. These function
calls will return with ENOSYS, Bad system call.

Looks like this environment setting differs in the two environments.
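A sketch of the corresponding fix (Cygwin-specific; the "server" value comes from the doc excerpt above, and the cygserver service must also be running): set the CYGWIN environment variable in the shell before re-running initdb.

```shell
# Enable the Cygserver facilities for client programs (XSI IPC calls)
# by adding "server" to the CYGWIN environment variable:
export CYGWIN="server"
echo "CYGWIN=$CYGWIN"    # prints: CYGWIN=server
# Then re-run the failing command, e.g. (path taken from the post):
# ./postgresql-8.4.1/src/bin/initdb/initdb.exe -D /pgsql/data
```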

-- 
  Richard Huxton
  Archonet Ltd



[GENERAL] limiting query time and/or RAM

2009-09-17 Thread Alan McKay
Is there any way to limit a query to a certain amount of RAM and / or
certain runtime?

i.e. automatically kill it if it exceeds either boundary?

We've finally narrowed down our system crashes and have a smoking gun,
but no way to fix it in the immediate term.  This sort of limit would
really help us.
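For the runtime half of the question, PostgreSQL's statement_timeout setting cancels any statement that runs longer than a given duration; there is no equivalent built-in per-query RAM cap. A sketch (the 60-second value and role name are illustrative):

```sql
-- Per-session: cancel any statement running longer than 60 seconds.
SET statement_timeout = '60s';

-- Or per-role, applying to every session of that (hypothetical) user:
ALTER ROLE reporting_user SET statement_timeout = '60s';
```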

-- 
“Don't eat anything you've ever seen advertised on TV”
 - Michael Pollan, author of In Defense of Food



Re: [GENERAL] limiting query time and/or RAM

2009-09-17 Thread Scott Marlowe
On Thu, Sep 17, 2009 at 12:56 PM, Alan McKay alan.mc...@gmail.com wrote:
 Is there any way to limit a query to a certain amount of RAM and / or
 certain runtime?

 i.e. automatically kill it if it exceeds either boundary?

 We've finally narrowed down our system crashes and have a smoking gun,
 but no way to fix it in the immediate term.  This sort of limit would
 really help us.

Generally speaking work_mem limits ram used.  What are your
non-default postgresql.conf settings?



Re: [GENERAL] limiting query time and/or RAM

2009-09-17 Thread Alan McKay
 Generally speaking work_mem limits ram used.  What are your
 non-default postgresql.conf settings?

This cannot be right, because we had queries taking 4 GB, yet our
setting is:

work_mem = 2MB  # min 64kB

I'll have to find a copy of the default file to figure out what my
non-defaults are

-- 
“Don't eat anything you've ever seen advertised on TV”
 - Michael Pollan, author of In Defense of Food



Re: [GENERAL] limiting query time and/or RAM

2009-09-17 Thread Scott Marlowe
On Thu, Sep 17, 2009 at 1:19 PM, Alan McKay alan.mc...@gmail.com wrote:
 Generally speaking work_mem limits ram used.  What are your
 non-default postgresql.conf settings?

 This cannot be right because we had queries taking 4G and see our
 setting is such :

Are you sure they were using that much memory?  If you see this in TOP:

  PID USER  PR  NI  VIRT  RES  SHR S %CPU %MEMTIME+  COMMAND
 8528 postgres  16   0 8458m 359m 349m R 38.7  1.1   0:04.46 postgres:

You do know that the query here is using 359-349 Megs, right, not 8G.
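Illustrating the RES minus SHR arithmetic with the numbers from the top line above:

```shell
# Per-backend private memory is roughly resident set minus shared memory:
res_mb=359   # RES from top
shr_mb=349   # SHR from top
echo "private memory: $(( res_mb - shr_mb )) MB"   # prints: private memory: 10 MB
```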

 work_mem = 2MB                          # min 64kB

 I'll have to find a copy of the default file to figure out what my
 non-defaults are

Just look for things that have no # in front of them.



Re: [GENERAL] creation of foreign key without checking prior data?

2009-09-17 Thread Sam Mason
On Thu, Sep 17, 2009 at 08:34:20PM +0200, hubert depesz lubaczewski wrote:
 On Thu, Sep 17, 2009 at 01:22:52PM -0500, Peter Hunsberger wrote:
  You can't have a foreign key that doesn't have relational integrity,
  it is no longer a foreign key.
 
 you do realize that having foreign key defined doesn't guarantee
 integrity?

The obvious cases would be software bugs and bad hardware.  What else?
Huh, how about users scribbling over PG's files!  Not sure where to
classify that, but it could happen either maliciously or accidentally as
the result of trying to clean up.

By having an override here you seem to be saying that you ultimately
trust yourself more than PG and/or the hardware it's running on.  I
suppose the trade off is time you *may* spend cleaning up later on if
this isn't true vs. the time PG *will* spend verifying the constraint
now.  Interesting trade off, never really considered it before.

Sounds valid, though the general mantra here is that PG knows best.  Is
that always true?

-- 
  Sam  http://samason.me.uk/



Re: [GENERAL] 'Weird' errors

2009-09-17 Thread Martin Gainty

thanks for the prompt response
I took the stack of bad calls and placed them in a .sh, and they all ran
flawlessly...
So there is a delta in there somewhere.

I have a followup (if I may).
I am able to get past the initdb, but when I run postgres on the data
folder I get "no postgresql.conf found"; specifically:
postgres cannot access the server configuration file
'/cygdrive/c/Postgres/pgsql/data/postgresql.conf'

Is there a foolproof way to allow postgres to access postgresql.conf?

thanks!
Martin Gainty 




 Date: Thu, 17 Sep 2009 19:35:30 +0100
 From: d...@archonet.com
 To: mgai...@hotmail.com
 CC: pgsql-general@postgresql.org
 Subject: Re: [GENERAL] 'Weird' errors
 
 Martin Gainty wrote:
  experiencing weird 14001 errors. No logs, no event viewer, nothing to go by,
  so I completely rebuilt the cygwin postgres
  
  ran cygwin
  then i ran the initdb as follows
  
  $ ./postgresql-8.4.1/src/bin/initdb/initdb.exe -D /pgsql/data
  The files belonging to this database system will be owned by user 
  postgres.
  This user must also own the server process.
  
  The database cluster will be initialized with locale C.
  The default database encoding has accordingly been set to SQL_ASCII.
  The default text search configuration will be set to english.
  
  creating directory /pgsql/data ... ok
  creating subdirectories ... ok
  selecting default max_connections ... sh: line 1:  2752 Bad system call
  /cygdrive/c/Postgres/postgresql-8.4.1/src/bin/initdb/postgres.exe --boot 
  -x0 -
  F -c max_connections=100 -c shared_buffers=1000 < /dev/null > /dev/null 2>&1
 
 Googling "cygwin bad system call" gets me this:
   http://people.maths.ox.ac.uk/~nichol/cruftscraper/cygwinenv.shtml
 
 The  CYGWIN environment variable
 ...
 (no)server - if set, allows client applications to use the Cygserver
 facilities. This option must be enabled explicitely on the client side,
 otherwise your applications won't be able to use the XSI IPC function
 calls (msgget, semget, shmget, and friends) successfully. These function
 calls will return with ENOSYS, Bad system call.
 
 Looks like this environment setting differs in the two environments.
 
 -- 
   Richard Huxton
   Archonet Ltd
 

_
Your E-mail and More On-the-Go. Get Windows Live Hotmail Free.
http://clk.atdmt.com/GBL/go/171222985/direct/01/

Re: [GENERAL] limiting query time and/or RAM

2009-09-17 Thread Bill Moran
In response to Scott Marlowe scott.marl...@gmail.com:

 On Thu, Sep 17, 2009 at 12:56 PM, Alan McKay alan.mc...@gmail.com wrote:
  Is there any way to limit a query to a certain amount of RAM and / or
  certain runtime?
 
  i.e. automatically kill it if it exceeds either boundary?
 
  We've finally narrowed down our system crashes and have a smoking gun,
  but no way to fix it in the immediate term.  This sort of limit would
  really help us.
 
 Generally speaking work_mem limits ram used.  What are your
 non-default postgresql.conf settings?

work_mem limits memory usage _per_sort_.

A big query can easily have many sorts.  Each sort will be limited to 
work_mem memory usage, but the total could be much higher.

The only way I can think is to set a per-process limit in the OS and allow
the OS to kill a process when it gets out of hand.  Not ideal, though.
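A sketch of that OS-level fallback (values illustrative; Linux assumed): cap each backend's virtual memory with ulimit in the shell that starts PostgreSQL, so a runaway allocation fails instead of exhausting RAM plus swap.

```shell
# ulimit -v takes kilobytes on Linux; compute a ~4 GB cap:
limit_gb=4
limit_kb=$(( limit_gb * 1024 * 1024 ))
echo "ulimit -v $limit_kb"    # prints: ulimit -v 4194304
# In the startup script one would then run, e.g. (hypothetical path):
# ulimit -v "$limit_kb" && pg_ctl start -D /pgsql/data
```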

-- 
Bill Moran
http://www.potentialtech.com
http://people.collaborativefusion.com/~wmoran/



Re: [GENERAL] limiting query time and/or RAM

2009-09-17 Thread Scott Marlowe
On Thu, Sep 17, 2009 at 1:31 PM, Bill Moran wmo...@potentialtech.com wrote:
 In response to Scott Marlowe scott.marl...@gmail.com:

 On Thu, Sep 17, 2009 at 12:56 PM, Alan McKay alan.mc...@gmail.com wrote:
  Is there any way to limit a query to a certain amount of RAM and / or
  certain runtime?
 
  i.e. automatically kill it if it exceeds either boundary?
 
  We've finally narrowed down our system crashes and have a smoking gun,
  but no way to fix it in the immediate term.  This sort of limit would
  really help us.

 Generally speaking work_mem limits ram used.  What are your
 non-default postgresql.conf settings?

 work_mem limits memory usage _per_sort_.

 A big query can easily have many sorts.  Each sort will be limited to
 work_mem memory usage, but the total could be much higher.

 The only way I can think is to set a per-process limit in the OS and allow
 the OS to kill a process when it gets out of hand.  Not ideal, though.

True, but with a work_mem of 2M, I can't imagine having enough sorting
going on to need 4G of ram.  (2000 sorts? That's a lot)  I'm betting
the OP was looking at top and misunderstanding what the numbers mean,
which is pretty common really.



Re: [GENERAL] limiting query time and/or RAM

2009-09-17 Thread Scott Marlowe
On Thu, Sep 17, 2009 at 1:35 PM, Scott Marlowe scott.marl...@gmail.com wrote:
 On Thu, Sep 17, 2009 at 1:31 PM, Bill Moran wmo...@potentialtech.com wrote:
 In response to Scott Marlowe scott.marl...@gmail.com:

 On Thu, Sep 17, 2009 at 12:56 PM, Alan McKay alan.mc...@gmail.com wrote:
  Is there any way to limit a query to a certain amount of RAM and / or
  certain runtime?
 
  i.e. automatically kill it if it exceeds either boundary?
 
  We've finally narrowed down our system crashes and have a smoking gun,
  but no way to fix it in the immediate term.  This sort of limit would
  really help us.

 Generally speaking work_mem limits ram used.  What are your
 non-default postgresql.conf settings?

 work_mem limits memory usage _per_sort_.

 A big query can easily have many sorts.  Each sort will be limited to
 work_mem memory usage, but the total could be much higher.

 The only way I can think is to set a per-process limit in the OS and allow
 the OS to kill a process when it gets out of hand.  Not ideal, though.

 True, but with a work_mem of 2M, I can't imagine having enough sorting
 going on to need 4G of ram.  (2000 sorts? That's a lot)  I'm betting
 the OP was looking at top and misunderstanding what the numbers mean,
 which is pretty common really.

Note in followup: the danger is when pgsql looks at a hashagg subplan
and thinks it will fit in work_mem and goes ahead, but in reality it
needs 1,000 times work_mem or more for such a plan, and exhausts
memory.  But to believe that's happening, I'll need to see what the OP
saw to convince him it was happening.  It's not unheard of, but it's
not that common either.



[GENERAL] Substitutes for some Oracle packages

2009-09-17 Thread Arnold, Sandra
We are in the process of migrating from Oracle to PostgreSQL.  One of the 
things that we are needing to find out is what to use in place of Oracle 
supplied functionality such as DBMS_OUTPUT and UTL_FILE.  We are currently 
using this type of functionality in Stored Procedures and packages.  What are 
the options in PostgreSQL for replacing these two packages in a stored 
procedure/function?

Thanks,

Sandra Arnold
Sr. DBA
DOE/OSTI
Oak Ridge, TN


Re: [GENERAL] 'Weird' errors

2009-09-17 Thread Richard Huxton
Martin Gainty wrote:
 thanks for the prompt response
 i took the stack of bad calls and placed them in a .sh and they all ran 
 flawlessly..
 there is a delta in there somewhere 
 
 I have a followup (if i may)
 I am able to get past the initdb  but when i run the postgres on the data 
 folder I get no postgresql.conf found specifically
 postgres cannot access the server configuration file 
 /cygdrive/c/Postgres/pgsql/data/postgresql.conf'
 
 is there a foolproof way that I will allow postgres to access postgresql.conf?

Is there some permissions problem? Is that what you mean by "cannot access"?

-- 
  Richard Huxton
  Archonet Ltd



Re: [GENERAL] Substitutes for some Oracle packages

2009-09-17 Thread Sam Mason
On Thu, Sep 17, 2009 at 03:53:36PM -0400, Arnold, Sandra wrote:
 We are in the process of migrating from Oracle to PostgreSQL.  One of
 the things that we are needing to find out is what to use in place of
 Oracle supplied functionality such as DBMS_OUTPUT and UTL_FILE.

For those of us who use PG and not Oracle, a description of the
functionality you need would help; the artifacts of your current
implementation are less helpful.  That said:

  plpgsql can RAISE NOTICE, which looks similar to DBMS_OUTPUT

  most untrusted scripting languages (i.e. plperl or plpython) can
  touch the filesystem, which is what UTL_FILE seems to be about
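A minimal sketch of the RAISE NOTICE analogue (the function name and message are hypothetical):

```sql
CREATE OR REPLACE FUNCTION log_progress(rows_done integer) RETURNS void AS $$
BEGIN
    -- Emitted to the client and/or server log, much like DBMS_OUTPUT.PUT_LINE:
    RAISE NOTICE 'processed % rows at %', rows_done, now();
END;
$$ LANGUAGE plpgsql;

SELECT log_progress(42);   -- NOTICE:  processed 42 rows at ...
```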

-- 
  Sam  http://samason.me.uk/



Re: [GENERAL] limiting query time and/or RAM

2009-09-17 Thread Scott Marlowe
On Thu, Sep 17, 2009 at 1:56 PM, Alan McKay alan.mc...@gmail.com wrote:
 On Thu, Sep 17, 2009 at 3:35 PM, Scott Marlowe scott.marl...@gmail.com 
 wrote:
 True, but with a work_mem of 2M, I can't imagine having enough sorting
 going on to need 4G of ram.  (2000 sorts? That's a lot)  I'm betting
 the OP was looking at top and misunderstanding what the numbers mean,
 which is pretty common really.

 Our databases are pretty big, and our queries pretty complex.

 Here is a snippet from last night's fun, leaving in a few normal
 rows, and the 3 errant ones which were an order of magnitude bigger

 The ps man page does not seem to say what the DRS field is.  One of

It means Data Resident Size.  If it's like RES in top, it means the
total memory being accessed.  You'd need to subtract however much of
your shared_buffers it's touching to know how much it's really using.
Which is why top is so handy: it shows both RES and SHR next to each
other.

 our DB guys read it as such.  May well be misreading, but the fact is
 we had a few queries running that were an order of magnitude bigger
 than others, and once we isolated it this morning we were able to
 reproduce the problem in our test environment, and hang it.   Just
 prior to this happening, Munin shows committed memory spikes from
 about 1.5G to 18G which equals RAM + SWAP

 ps -U postgres -v

  PID TTY      STAT   TIME  MAJFL   TRS   DRS   RSS %MEM COMMAND
  1064 ?        Ss     0:01      0  3562 636289 7232  0.0 postgres:
 foobar pgdb001 192.168.3.151(46867) idle

 14235 ?        Ss    29:41      0  3562 6316881 4852556 29.5 postgres:
 foobar pgdb001 192.168.2.66(60421) SELECT

I'm gonna make a SWAG that you've got 4 to 4.5G shared buffers, and if
you subtract that from DRS you'll find it's using a few hundred to
several hundred megs.  still a lot, but not in the 4G range you're
expecting.  What does top say about this?

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] limiting query time and/or RAM

2009-09-17 Thread Sam Mason
On Thu, Sep 17, 2009 at 03:56:09PM -0400, Alan McKay wrote:
 Our databases are pretty big, and our queries pretty complex.

How big is big and how complex is complex?

An EXPLAIN (EXPLAIN ANALYSE if it's not going to hurt things) of some of
your common queries would help a lot here.

-- 
  Sam  http://samason.me.uk/

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Substitutes for some Oracle packages

2009-09-17 Thread Arnold, Sandra
DBMS_OUTPUT is used to either display output or write output to a file.  
UTL_FILE is used to open a file and then write data to a file.  Most of the 
time we use these two packages to create log files from PL/SQL stored 
procedures/packages. 

-Original Message-
From: pgsql-general-ow...@postgresql.org 
[mailto:pgsql-general-ow...@postgresql.org] On Behalf Of Sam Mason
Sent: Thursday, September 17, 2009 4:04 PM
To: pgsql-general@postgresql.org
Subject: Re: [GENERAL] Substitutes for some Oracle packages

On Thu, Sep 17, 2009 at 03:53:36PM -0400, Arnold, Sandra wrote:
 We are in the process of migrating from Oracle to PostgreSQL.  One of
 the things that we are needing to find out is what to use in place of
 Oracle supplied functionality such as DBMS_OUTPUT and UTL_FILE.

For those of us who use PG and not Oracle a description of the
functionality you need would help, the artifacts of your current
implementation are less helpful.  That said:

  plpgsql can RAISE NOTICE, which looks similar to DBMS_OUTPUT

  most untrusted scripting languages (i.e. plperl or plpython) can
  touch the filesystem, which is what UTL_FILE seems to be about

-- 
  Sam  http://samason.me.uk/



Re: [GENERAL] limiting query time and/or RAM

2009-09-17 Thread Alan McKay
 I'm gonna make a SWAG that you've got 4 to 4.5G shared buffers, and if
 you subtract that from DRS you'll find it's using a few hundred to
 several hundred megs.  still a lot, but not in the 4G range you're
 expecting.  What does top say about this?

I've just added this to my cronjob with top -b -n 1 -u postgres.
Hopefully I won't let you know tomorrow :-)

listen_addresses = '*'   # what IP address(es) to listen on;
port = 5432 # (change requires restart)
max_connections = 350   # (change requires restart)
shared_buffers = 500MB  # min 128kB or max_connections*16kB
work_mem = 2MB  # min 64kB
maintenance_work_mem = 128MB# min 1MB
max_fsm_pages = 153600  # min max_fsm_relations*16, 6 bytes each
fsync = on  # turns forced synchronization on or off
wal_sync_method = fsync  # the default is the first option
full_page_writes = on   # recover from partial page writes
wal_buffers = 1MB   # min 32kB
commit_delay = 0# range 0-10, in microseconds
checkpoint_segments = 16# in logfile segments, min 1, 16MB each
archive_mode = on   # allows archiving to be done
archive_command = 'test ! -f /data/pgsql/backups/wal_arch/%f.gz && cp
%p  /var/lib/pgsql/backups/wal_arch/%f'  # command to use to archive a
logfile segment
archive_timeout = 0  # force a logfile segment switch after this
effective_cache_size = 10GB
default_statistics_target = 100  # range 1-1000
logging_collector = on   # Enable capturing of stderr and csvlog
log_directory = 'pg_log'# directory where log files are written,
log_connections = off
log_disconnections = off
log_hostname = off
log_line_prefix = '%t:%...@%r:%p' # special values:
autovacuum = off# Enable autovacuum subprocess?  'on'
search_path = 
'$user,public,quant001,dbprc001,price001,instrument001,client001'
  # schema names
vacuum_freeze_min_age = 10
datestyle = 'iso, mdy'
lc_messages = 'en_US.utf8'   # locale for system error message
lc_monetary = 'en_US.utf8'   # locale for monetary formatting
lc_numeric = 'en_US.utf8'# locale for number formatting
lc_time = 'en_US.utf8'   # locale for time formatting
default_text_search_config = 'pg_catalog.english'
custom_variable_classes = 'olap'
olap.rownum_name = 'default'


-- 
“Don't eat anything you've ever seen advertised on TV”
 - Michael Pollan, author of In Defense of Food



Re: [GENERAL] haversine formula with postgreSQL

2009-09-17 Thread Scott Marlowe
On Thu, Sep 17, 2009 at 1:16 PM, Jonathan jharah...@gmail.com wrote:
 Hi!

 I am looking at the PHP/MySQL Google Maps API store locator example
 here:

 http://code.google.com/apis/maps/articles/phpsqlsearch.html

 And I'm trying to get this to work with PostgreSQL instead of MySQL.

 I've (slightly) modified the haversine formula part of my PHP script
 but I keep getting this error:

 Invalid query: ERROR: column distance does not exist LINE
 1: ...ude ) ) ) ) AS distance FROM aaafacilities HAVING distance ...
 ^

 I'm new to this, but it doesn't look like I need to create a column in
 my table for distance, or at least the directions didn't say to create
 a distance column.

 Here is my PHP with SQL:
 $query = sprintf(SELECT 'ID', 'FACILITY', 'ADDRESS', latitude,
 longitude, ( 3959 * acos( cos( radians('%s') ) * cos( radians
 ( latitude ) ) * cos( radians( longitude ) - radians('%s') ) + sin
 ( radians('%s') ) * sin( radians( latitude ) ) ) ) AS distance FROM
 aaafacilities HAVING distance < '%s' ORDER BY dist LIMIT 0 OFFSET 20,
  pg_escape_string($center_lat),
  pg_escape_string($center_lng),
  pg_escape_string($center_lat),
  pg_escape_string($radius));

 Does anyone have any ideas on how I can get this to work?  I'm not
 sure what is wrong, since it doesn't seem like I need to create a
 distance column and when I do create one, I get this:

Is that really the whole query?  Why a having with no group by?

Can you do me a favor and print out $query instead of the php stuff?
It might help you as well to troubleshoot to see the real query.
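For reference, one possible rewrite (column names taken from the post; the literal coordinates and radius are placeholders standing in for the PHP parameters): PostgreSQL does not let WHERE or HAVING reference an output-column alias, and quoting 'ID' in single quotes selects a string literal rather than a column, so compute the expression in a subquery and filter on it in the outer query.

```sql
SELECT *
FROM (
    SELECT id, facility, address, latitude, longitude,
           ( 3959 * acos( cos( radians(37.0) ) * cos( radians(latitude) )
                        * cos( radians(longitude) - radians(-122.0) )
                        + sin( radians(37.0) ) * sin( radians(latitude) ) )
           ) AS distance
    FROM aaafacilities
) AS facilities_with_distance
WHERE distance < 25        -- radius in miles
ORDER BY distance
LIMIT 20;
```

If the columns were created with quoted mixed-case names, they would need double quotes ("ID", "FACILITY", ...) rather than single quotes.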



Re: [GENERAL] Substitutes for some Oracle packages

2009-09-17 Thread Scott Marlowe
On Thu, Sep 17, 2009 at 1:53 PM, Arnold, Sandra arno...@osti.gov wrote:
 We are in the process of migrating from Oracle to PostgreSQL.  One of the
 things that we are needing to find out is what to use in place of Oracle
 supplied functionality such as DBMS_OUTPUT and UTL_FILE.  We are
 currently using this type of functionality in Stored Procedures and
 packages.  What are the options in PostgreSQL for replacing these two
 packages in a stored procedure/function?

Not being that familiar with all the packages oracle comes with, what
do those packages do?



Re: [GENERAL] limiting query time and/or RAM

2009-09-17 Thread Alan McKay
On Thu, Sep 17, 2009 at 3:35 PM, Scott Marlowe scott.marl...@gmail.com wrote:
 True, but with a work_mem of 2M, I can't imagine having enough sorting
 going on to need 4G of ram.  (2000 sorts? That's a lot)  I'm betting
 the OP was looking at top and misunderstanding what the numbers mean,
 which is pretty common really.

Our databases are pretty big, and our queries pretty complex.

Here is a snippet from last night's fun, leaving in a few normal
rows, and the 3 errant ones which were an order of magnitude bigger

The ps man page does not seem to say what the DRS field is.  One of
our DB guys read it as such.  May well be misreading, but the fact is
we had a few queries running that were an order of magnitude bigger
than others, and once we isolated it this morning we were able to
reproduce the problem in our test environment, and hang it.   Just
prior to this happening, Munin shows committed memory spikes from
about 1.5G to 18G which equals RAM + SWAP

ps -U postgres -v

  PID TTY  STAT   TIME  MAJFL   TRS   DRS   RSS %MEM COMMAND
 1064 ?Ss 0:01  0  3562 636289 7232  0.0 postgres:
foobar pgdb001 192.168.3.151(46867) idle

14235 ?Ss29:41  0  3562 6316881 4852556 29.5 postgres:
foobar pgdb001 192.168.2.66(60421) SELECT
14491 ?Ss 0:01  0  3562 636545 7284  0.0 postgres:
foobar pgdb001 192.168.2.66(55705) SELECT

14889 ?Rs29:36 12  3562 6316937 4876228 29.6 postgres:
foobar pgdb001 192.168.2.62(48275) SELECT

14940 ?Ss 0:00  0  3562 636845 7912  0.0 postgres:
foobar pgdb001 192.168.2.62(43561) SELECT
14959 ?Rs29:34 16  3562 6315141 4885224 29.7 postgres:
foobar pgdb001 192.168.2.62(48314) SELECT
14985 ?Ss 0:01  0  3562 636545 7288  0.0 postgres:
foobar pgdb001 192.168.2.66(55946) SELECT


-- 
“Don't eat anything you've ever seen advertised on TV”
 - Michael Pollan, author of In Defense of Food



[GENERAL] NAS

2009-09-17 Thread Marco Fortina

Hello there!

A customer of our company would like to create a 2 nodes cluster connected to 
an external NAS storage. I would like to know if PostgreSQL supports its 
datafiles on this kind of storage and if this solution is certified.

Thanks

Marco Fortina
Senior Consultant
Mobile:+39 348 5246730

BSC Consulting S.p.A.
Gruppo Terasystem
Tel. +39 010 64301.1
Fax. +39 010 64301.700
http://www.bsc.it

Please consider the environment before printing this email
The information in this e-mail and in any attachments is confidential and may 
be privileged. If you aren’t the intended recipient, please destroy this 
message, delete any copies held on your systems and notify the sender 
immediately. You shouldn’t retain, copy or use this e-mail for any purpose, nor 
disclose all or any part of its content to any other person.



Re: [GENERAL] NAS

2009-09-17 Thread Scott Marlowe
On Thu, Sep 17, 2009 at 8:06 AM, Marco Fortina marco.fort...@bsc.it wrote:

 Hello there!

 A customer of our company would like to create a 2 nodes cluster connected
 to an external NAS storage. I would like to know if PostgreSQL supports its
 datafiles on this kind of storage and if this solution is certified.

If you're thinking a shared data directory, no, it won't work.



Re: [GENERAL] Substitutes for some Oracle packages

2009-09-17 Thread Pavel Stehule
Hello,

2009/9/17 Arnold, Sandra arno...@osti.gov:
 We are in the process of migrating from Oracle to PostgreSQL.  One of the
 things that we are needing to find out is what to use in place of Oracle
 supplied functionality such as DBMS_OUTPUT and UTL_FILE.  We are
 currently using this type of functionality in Stored Procedures and
 packages.  What are the options in PostgreSQL for replacing these two
 packages in a stored procedure/function?
 Thanks,

-
For typical use you can use RAISE NOTICE in place of DBMS_OUTPUT;
UTL_FILE isn't supported by core.

Migration process could be simplified by using Orafce package:

http://pgfoundry.org/projects/orafce/
http://orafce.projects.postgresql.org/
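For a quick illustration, a DBMS_OUTPUT.PUT_LINE call translates roughly like
this (a sketch; the function name is made up for the example):

```sql
CREATE OR REPLACE FUNCTION demo_notice() RETURNS void AS $$
BEGIN
  -- Oracle: DBMS_OUTPUT.PUT_LINE('processed ' || 42 || ' rows');
  -- PostgreSQL equivalent, delivered to the client as a NOTICE message:
  RAISE NOTICE 'processed % rows', 42;
END;
$$ LANGUAGE plpgsql;
```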

regards
Pavel  Stehule

 Sandra Arnold
 Sr. DBA
 DOE/OSTI
 Oak Ridge, TN



Re: [GENERAL] Does PG cache results of an aggregate function, (and results of non-volatile functions)?

2009-09-17 Thread Bruce Momjian
Allan Kamau wrote:
 Hi,
 I do have a query which make use of the results of an aggregate
 function (for example bit_or) several times in the output column list
 of the SELECT clause, does PostgreSQL simply execute the aggregate
 function only once and provide the output to the other calls to the
 same aggregate function.
 How about the case of non volatile functions? Do they get executed as
 many times as they occur in the select clause?

I did not see a reply to this.  Does anyone know the answer?

-- 
  Bruce Momjian  br...@momjian.ushttp://momjian.us
  EnterpriseDB http://enterprisedb.com

  + If your life is a hard drive, Christ can be your backup. +



Re: [GENERAL] 'Weird' errors

2009-09-17 Thread Martin Gainty

yes 
i flipped to my regular account and re-created the db with initdb -D newFolder
i have noticed that initdb basically deletes everything ..and the reason for 
doing that would be?
initdb -D dataDir --noclean 
allows initdb to retain the data folder and postgresql.conf configuration file 
contents

thanks,
Martin Gainty 
__ 
Disclaimer and confidentiality note

This message is confidential. If you are not the intended recipient, we kindly 
ask that you notify the sender. Any unauthorized forwarding or copying is 
prohibited. This message serves only the exchange of information and has no 
legally binding effect. Because e-mails can easily be manipulated, we cannot 
accept any liability for their content.




 Date: Thu, 17 Sep 2009 20:57:04 +0100
 From: d...@archonet.com
 To: mgai...@hotmail.com
 CC: pgsql-general@postgresql.org
 Subject: Re: [GENERAL] 'Weird' errors
 
 Martin Gainty wrote:
  thanks for the prompt response
  i took the stack of bad calls and placed them in a .sh and they all ran 
  flawlessly..
  there is a delta in there somewhere 
  
  I have a followup (if i may)
  I am able to get past the initdb  but when i run the postgres on the data 
  folder I get no postgresql.conf found specifically
  postgres cannot access the server configuration file 
  /cygdrive/c/Postgres/pgsql/data/postgresql.conf'
  
  is there a foolproof way that I will allow postgres to access 
  postgresql.conf?
 
 Is there some permissions problem? Is that what you mean by cannot access?
 
 -- 
   Richard Huxton
   Archonet Ltd
 

_
Your E-mail and More On-the-Go. Get Windows Live Hotmail Free.
http://clk.atdmt.com/GBL/go/171222985/direct/01/

[GENERAL] haversine formula with postgreSQL

2009-09-17 Thread Jonathan
Hi!

I am looking at the PHP/MySQL Google Maps API store locator example
here:

http://code.google.com/apis/maps/articles/phpsqlsearch.html

And I'm trying to get this to work with PostgreSQL instead of MySQL.

I've (slightly) modified the haversine formula part of my PHP script
but I keep getting this error:

Invalid query: ERROR: column "distance" does not exist LINE
1: ...ude ) ) ) ) AS distance FROM aaafacilities HAVING distance ...
^

I'm new to this, but it doesn't look like I need to create a column in
my table for distance, or at least the directions didn't say to create
a distance column.

Here is my PHP with SQL:
$query = sprintf("SELECT 'ID', 'FACILITY', 'ADDRESS', latitude,
longitude, ( 3959 * acos( cos( radians('%s') ) * cos( radians
( latitude ) ) * cos( radians( longitude ) - radians('%s') ) + sin
( radians('%s') ) * sin( radians( latitude ) ) ) ) AS distance FROM
aaafacilities HAVING distance < '%s' ORDER BY dist LIMIT 0 OFFSET 20",
  pg_escape_string($center_lat),
  pg_escape_string($center_lng),
  pg_escape_string($center_lat),
  pg_escape_string($radius));

Does anyone have any ideas on how I can get this to work?  I'm not
sure what is wrong, since it doesn't seem like I need to create a
distance column and when I do create one, I get this:

Invalid query: ERROR: column aaafacilities.latitude must appear in
the GROUP BY clause or be used in an aggregate function

Thanks for any comments or suggestions.  I appreciate it.  I'm new to
this.

Jonathan Harahush



Re: [GENERAL] 'Weird' errors

2009-09-17 Thread Tom Lane
Martin Gainty mgai...@hotmail.com writes:
 i flipped to my regular account and re-created the db with initdb -D newFolder
 i have noticed that initdb basically deletes everything ..and the reason for 
 doing that would be?

Oh?  It should refuse to do anything if the target directory is not
empty; and does act that way, in my testing.

regards, tom lane



Re: [GENERAL] Row estimates on empty table

2009-09-17 Thread Tom Lane
Alban Hertroys dal...@solfertje.student.utwente.nl writes:
 I'm seeing something strange with the row-estimates on an empty table.  

It's intentional that we don't assume an empty table is empty.
Otherwise you could get some spectacularly awful plans if you
create a table, fill it, and immediately query it.
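You can watch this from psql (a sketch; the exact default estimate varies by
version):

```sql
CREATE TABLE t (x int);
EXPLAIN SELECT * FROM t;   -- estimate is a nonzero default, not 0 rows
ANALYZE t;
EXPLAIN SELECT * FROM t;   -- estimate now follows the gathered statistics
```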

regards, tom lane



Re: [GENERAL] haversine formula with postgreSQL

2009-09-17 Thread Tom Lane
Jonathan jharah...@gmail.com writes:
 Here is my PHP with SQL:
 $query = sprintf(SELECT 'ID', 'FACILITY', 'ADDRESS', latitude,
 longitude, ( 3959 * acos( cos( radians('%s') ) * cos( radians
 ( latitude ) ) * cos( radians( longitude ) - radians('%s') ) + sin
 ( radians('%s') ) * sin( radians( latitude ) ) ) ) AS distance FROM
 aaafacilities HAVING distance < '%s' ORDER BY dist LIMIT 0 OFFSET 20,

Sigh, you've been misled by MySQL's nonstandard behavior.  You cannot
refer to output columns of a query in its HAVING clause; it's disallowed
per spec and not logically sensible either.  The simplest way to deal
with it is just to repeat the expression in HAVING.  If you really
really don't want to write it twice, you can use a subquery.
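Applied to the query above, either of these shapes should work (a sketch;
:lat, :lng and :radius stand in for the interpolated values, and since there
is no GROUP BY the filter belongs in WHERE anyway):

```sql
-- Option 1: repeat the expression in the filter clause
SELECT latitude, longitude,
       3959 * acos( cos(radians(:lat)) * cos(radians(latitude))
                  * cos(radians(longitude) - radians(:lng))
                  + sin(radians(:lat)) * sin(radians(latitude)) ) AS distance
FROM aaafacilities
WHERE 3959 * acos( cos(radians(:lat)) * cos(radians(latitude))
                 * cos(radians(longitude) - radians(:lng))
                 + sin(radians(:lat)) * sin(radians(latitude)) ) < :radius
ORDER BY distance LIMIT 20;

-- Option 2: compute it once in a subquery, filter outside
SELECT * FROM (
  SELECT latitude, longitude,
         3959 * acos( cos(radians(:lat)) * cos(radians(latitude))
                    * cos(radians(longitude) - radians(:lng))
                    + sin(radians(:lat)) * sin(radians(latitude)) ) AS distance
  FROM aaafacilities
) AS sub
WHERE distance < :radius
ORDER BY distance LIMIT 20;
```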

regards, tom lane



Re: [GENERAL] 'Weird' errors

2009-09-17 Thread Martin Gainty

i was able to get it sorted out with the latest release, using these initdb and 
pg_ctl commands as follows
initdb --pgdata=/pgsql/data

#postgresql.conf contents consist of
# this is a comment

#what port to run on
port = 5432

#hostname or address on which the postmaster is to listen for connections from 
client applications
#virtual_host = 'localhost'

#prints a line informing about each successful connection to the server log
log_connections = true

#prefixes each server log message with a timestamp
#log_timestamp = true
#determines how many concurrent connections the database server will allow
max_connections = 3
###end postgresql.conf

# START pg_hba.conf
# Allow any user on the local system to connect to any database under
# any database user name using Unix-domain sockets (the default for local
# connections).
#
# TYPE  DATABASEUSERCIDR-ADDRESS  METHOD
local   all all   trust
#END pg_hba.conf

# start PG_VERSION file contents
8.4
# end PG_VERSION file

$ /cygdrive/c/postgres/pgsql/bin/pg_ctl -D /pgsql/data -l logfile start

server starting

thanks tom
Martin 




 To: mgai...@hotmail.com
 CC: d...@archonet.com; pgsql-general@postgresql.org
 Subject: Re: [GENERAL] 'Weird' errors 
 Date: Thu, 17 Sep 2009 17:25:07 -0400
 From: t...@sss.pgh.pa.us
 
 Martin Gainty mgai...@hotmail.com writes:
  i flipped to my regular account and re-created the db with initdb -D 
  newFolder
  i have noticed that initdb basically deletes everything ..and the reason 
  for doing that would be?
 
 Oh?  It should refuse to do anything if the target directory is not
 empty; and does act that way, in my testing.
 
   regards, tom lane
 

_
Hotmail: Free, trusted and rich email service.
http://clk.atdmt.com/GBL/go/171222984/direct/01/

Re: [GENERAL] limiting query time and/or RAM

2009-09-17 Thread Alan McKay
 An EXPLAIN (EXPLAIN ANALYSE if it's not going to hurt things) of some of
 your common queries would help a lot here.

Yes, we are just about to start getting into that sort of thing.

-- 
“Don't eat anything you've ever seen advertised on TV”
 - Michael Pollan, author of In Defense of Food



Re: [GENERAL] haversine formula with postgreSQL

2009-09-17 Thread Mike Christensen
This behavior kinda gets me sometimes too, especially in WHERE clauses..

I'm a bit curious as to why this is so bad.  I could see why it would
be expensive to do, since your clause wouldn't be indexed - but why is
the syntax itself not allowed?  Repeating the clause isn't gonna gain
you any speed, and might make it even slower since the expression
would have to be evaluated twice right?  Perhaps I'm missing
something..

Mike

On Thu, Sep 17, 2009 at 2:47 PM, Tom Lane t...@sss.pgh.pa.us wrote:
 Jonathan jharah...@gmail.com writes:
 Here is my PHP with SQL:
 $query = sprintf(SELECT 'ID', 'FACILITY', 'ADDRESS', latitude,
 longitude, ( 3959 * acos( cos( radians('%s') ) * cos( radians
 ( latitude ) ) * cos( radians( longitude ) - radians('%s') ) + sin
 ( radians('%s') ) * sin( radians( latitude ) ) ) ) AS distance FROM
 aaafacilities HAVING distance < '%s' ORDER BY dist LIMIT 0 OFFSET 20,

 Sigh, you've been misled by MySQL's nonstandard behavior.  You cannot
 refer to output columns of a query in its HAVING clause; it's disallowed
 per spec and not logically sensible either.  The simplest way to deal
 with it is just to repeat the expression in HAVING.  If you really
 really don't want to write it twice, you can use a subquery.

                        regards, tom lane





Re: [GENERAL] haversine formula with postgreSQL

2009-09-17 Thread Jonathan
It's the whole query as far as I can tell.  The app takes input from
the user --- the user enters an address and chooses a radius (show me
all facilities within 5 miles of this address) and then the latitude
and longitude of the address and the radius is passed into the query
so that the database can grab all locations within a certain radius.

The example provided by Google is using MySQL.  The query for MySQL
looks like this:

SELECT address, name, lat, lng, ( 3959 * acos( cos( radians('%s') ) *
cos( radians( lat ) ) * cos( radians( lng ) - radians('%s') ) + sin
( radians('%s') ) * sin( radians( lat ) ) ) ) AS distance FROM markers
HAVING distance < '%s' ORDER BY distance LIMIT 0 , 20

And I'm attempting to change it to work with Postgres and have done
this:

SELECT 'ID', 'FACILITY', 'ADDRESS', latitude, longitude, ( 3959 * acos
( cos( radians('%s') ) * cos( radians( latitude ) ) * cos( radians
( longitude ) - radians('%s') ) + sin( radians('%s') ) * sin( radians
( latitude ) ) ) ) AS distance FROM aaafacilities HAVING distance <
'%s' ORDER BY distance LIMIT 0 OFFSET 20


aaafacilities is my table name in my postgres database.

I'm sorry if this isn't enough info.. like I said, I'm new to this but
definitely interested in learning and figuring this out!

From what I can tell, the database is supposed to calculate and then
output the distance of each match but it seems like in the MySQL
example, it can do this without having an actual distance column in
the database.
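For what it's worth, the formula itself is easy to sanity-check outside the
database. Here is a quick Python sketch of the same spherical-law-of-cosines
calculation the query performs (the coordinates below are just illustrative):

```python
import math

def haversine_miles(lat1, lng1, lat2, lng2):
    """Great-circle distance in statute miles, in the same
    spherical-law-of-cosines form as the query (3959 = Earth radius, miles)."""
    lat1, lng1, lat2, lng2 = map(math.radians, (lat1, lng1, lat2, lng2))
    c = (math.cos(lat1) * math.cos(lat2) * math.cos(lng2 - lng1)
         + math.sin(lat1) * math.sin(lat2))
    # Clamp against floating-point drift before acos.
    return 3959 * math.acos(max(-1.0, min(1.0, c)))

# A point is zero miles from itself; New York to Los Angeles comes out
# in the neighborhood of 2,450 statute miles.
print(haversine_miles(40.7128, -74.0060, 40.7128, -74.0060))
print(haversine_miles(40.7128, -74.0060, 34.0522, -118.2437))
```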

Thanks again!



Re: [GENERAL] haversine formula with postgreSQL

2009-09-17 Thread Tom Lane
Mike Christensen m...@kitchenpc.com writes:
 This behavior kinda gets me sometimes too, especially in WHERE clauses..
 I'm a bit curious as to why this is so bad.  I could see why it would
 be expensive to do, since your clause wouldn't be indexed - but why is
 the syntax itself not allowed?

It's not logically sensible: per the implicit execution model defined by
the spec, the output list is not computed until after WHERE/HAVING/etc
are evaluated, so it makes no sense to refer to output expressions in
those clauses.  As an example of why the execution ordering is
important, you'd be pretty unhappy if this threw a division-by-zero
error:

select 1/avg(x) from tab group by y having avg(x) <> 0;

While we could interpret such a reference as meaning to copy the output
expression into the other clause, it doesn't seem like a particularly
good idea to encourage confusion about what the syntax means.

Also, allowing references to output column names here actually creates
an ambiguity: for instance "distance" could also be a column name
available from some table in the FROM clause.  So it's not exactly
cost-free to allow this; it will likely result in queries being silently
interpreted in some way other than what the author expected.

BTW, there are two cases where Postgres *does* allow such references:

1. In ORDER BY.  This is mainly because the SQL spec used to require it.
It's actually logically consistent because ORDER BY is notionally
executed after forming the output expressions, but it's still confusing.
The spec authors thought better of this idea and removed it in SQL99,
but we're still stuck supporting it for backwards compatibility reasons.

2.  In GROUP BY.  This is, frankly, a mistake, and one I wish we could
have a do-over on.  Again we're stuck with it for compatibility reasons,
but we're not likely to extend the mistake to other clauses.

In both these cases, to reduce the scope for ambiguity problems we only
allow references to output columns as simple ORDER or GROUP list items
(for instance "ORDER BY distance" but not "ORDER BY abs(distance)").
This is all right because it's still frequently useful, but you'd seldom
write a WHERE or HAVING clause that consisted *only* of an output-column
name.  So even if we did extend the behavior it wouldn't help much,
unless we were to fling the doors wide open for ambiguity problems.

regards, tom lane



Re: [GENERAL] NAS

2009-09-17 Thread John R Pierce

Marco Fortina wrote:


Hello there!

A customer of our company would like to create a 2 nodes cluster 
connected to an external NAS storage. I would like to know if 
PostgreSQL supports its datafiles on this kind of storage and if this 
solution is certified.




active/standby type cluster, where only one server has the database 
software running at a time?  in theory, should work.   NAS storage is 
often NOT recommended for database table storage, however, as it often 
has funny quirks with regards to random access, concurrent access, and 
so forth.   early versions of NFS were especially bad about this.   most 
shared active/standby database high availability clusters use SAN block 
storage for this reason (Fibre Channel, iSCSI), and those often use 
'fencing hardware' to be sure that the current standby system doesn't 
mount the file system when the other member is active.


afaik, postgresql doesn't 'certify' anything, and certainly most of us 
on this email list do not speak for postgresql.org, we're mostly just users.






Re: [GENERAL] haversine formula with postgreSQL

2009-09-17 Thread Brent Wood
A bit out in left field,

Writing your own haversine in Postgres seems a bit like reinventing a wooden 
wheel when you can get a free pneumatic one...

Any reason not to just install PostGIS & fully support geometries & projections 
in Postgres?

You can build the geometries provided to the functions on the fly from lat/lon 
coordinates stored as numerics in your SQL, so your DB structures don't even 
have to change if you don't want them to..

http://www.postgis.org/documentation/manual-1.4/ST_Distance_Sphere.html
http://www.postgis.org/documentation/manual-1.4/ST_Distance_Spheroid.html
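e.g. roughly like this (an untested sketch; ST_Distance_Sphere returns meters,
and :lat, :lng, :radius_miles are placeholders):

```sql
SELECT latitude, longitude,
       ST_Distance_Sphere(ST_MakePoint(longitude, latitude),
                          ST_MakePoint(:lng, :lat)) / 1609.344 AS distance_miles
FROM aaafacilities
WHERE ST_Distance_Sphere(ST_MakePoint(longitude, latitude),
                         ST_MakePoint(:lng, :lat)) < :radius_miles * 1609.344
ORDER BY distance_miles LIMIT 20;
```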


HTH 

   Brent Wood


Brent Wood
DBA/GIS consultant
NIWA, Wellington
New Zealand
 Scott Marlowe scott.marl...@gmail.com 09/18/09 11:35 AM 
On Thu, Sep 17, 2009 at 1:16 PM, Jonathan jharah...@gmail.com wrote:
 Hi!

 I am looking at the PHP/MySQL Google Maps API store locator example
 here:

 http://code.google.com/apis/maps/articles/phpsqlsearch.html

 And I'm trying to get this to work with PostgreSQL instead of MySQL.

 I've (slightly) modified the haversine formula part of my PHP script
 but I keep getting this error:

 Invalid query: ERROR: column "distance" does not exist LINE
 1: ...ude ) ) ) ) AS distance FROM aaafacilities HAVING distance ...
 ^

 I'm new to this, but it doesn't look like I need to create a column in
 my table for distance, or at least the directions didn't say to create
 a distance column.

 Here is my PHP with SQL:
 $query = sprintf(SELECT 'ID', 'FACILITY', 'ADDRESS', latitude,
 longitude, ( 3959 * acos( cos( radians('%s') ) * cos( radians
 ( latitude ) ) * cos( radians( longitude ) - radians('%s') ) + sin
 ( radians('%s') ) * sin( radians( latitude ) ) ) ) AS distance FROM
 aaafacilities HAVING distance < '%s' ORDER BY dist LIMIT 0 OFFSET 20,
  pg_escape_string($center_lat),
  pg_escape_string($center_lng),
  pg_escape_string($center_lat),
  pg_escape_string($radius));

 Does anyone have any ideas on how I can get this to work?  I'm not
 sure what is wrong, since it doesn't seem like I need to create a
 distance column and when I do create one, I get this:

Is that really the whole query?  Why a having with no group by?

Can you do me a favor and print out $query instead of the php stuff?
It might help you as well to troubleshoot to see the real query.


NIWA is the trading name of the National Institute of Water & Atmospheric 
Research Ltd.



Re: [GENERAL] NAS

2009-09-17 Thread Scott Marlowe
On Thu, Sep 17, 2009 at 5:53 PM, John R Pierce pie...@hogranch.com wrote:

 afaik, postgresql doesn't 'certify' anything, and certainly most of us on
 this email list do not speak for postgresql.org, we're mostly just users.

However, many of us on the list ARE certifiable.  But that's a different story.



Re: [GENERAL] haversine formula with postgreSQL

2009-09-17 Thread Jonathan Harahush
I do have PostGIS installed and I use it for other things (geoserver), but
I'm not knowledgeable enough about it to the point where I understand how to
get it to work with the Google Maps API.  I'll look into it.  In the
meantime, I was hoping to create something based off of the GMaps/PHP/MySQL
example I referenced in an earlier post since I'm still learning.
The reason why I'm using Postgres is because it's installed at work.  We
don't use MySQL.

Thanks for all of the help so far!  I appreciate it.

On Thu, Sep 17, 2009 at 5:50 PM, Brent Wood b.w...@niwa.co.nz wrote:

 A bit out in left field,

 Writing your own haversine in Postgres seems a bit like reinventing a
 wooden wheel when you can get a free pneumatic one...

 Any reason not to just install PostGIS & fully support geometries &
 projections in Postgres?

 You can build the geometries provided to the functions on the fly from
 lat/lon coordinates stored as numerics in your SQL, so your DB structures
 don't even have to change if you don't want them to..

 http://www.postgis.org/documentation/manual-1.4/ST_Distance_Sphere.html
 http://www.postgis.org/documentation/manual-1.4/ST_Distance_Spheroid.html


 HTH

   Brent Wood


 Brent Wood
 DBA/GIS consultant
 NIWA, Wellington
 New Zealand
  Scott Marlowe scott.marl...@gmail.com 09/18/09 11:35 AM 
 On Thu, Sep 17, 2009 at 1:16 PM, Jonathan jharah...@gmail.com wrote:
  Hi!
 
  I am looking at the PHP/MySQL Google Maps API store locator example
  here:
 
  http://code.google.com/apis/maps/articles/phpsqlsearch.html
 
  And I'm trying to get this to work with PostgreSQL instead of MySQL.
 
  I've (slightly) modified the haversine formula part of my PHP script
  but I keep getting this error:
 
  Invalid query: ERROR: column "distance" does not exist LINE
  1: ...ude ) ) ) ) AS distance FROM aaafacilities HAVING distance ...
  ^
 
  I'm new to this, but it doesn't look like I need to create a column in
  my table for distance, or at least the directions didn't say to create
  a distance column.
 
  Here is my PHP with SQL:
  $query = sprintf(SELECT 'ID', 'FACILITY', 'ADDRESS', latitude,
  longitude, ( 3959 * acos( cos( radians('%s') ) * cos( radians
  ( latitude ) ) * cos( radians( longitude ) - radians('%s') ) + sin
  ( radians('%s') ) * sin( radians( latitude ) ) ) ) AS distance FROM
  aaafacilities HAVING distance < '%s' ORDER BY dist LIMIT 0 OFFSET 20,
   pg_escape_string($center_lat),
   pg_escape_string($center_lng),
   pg_escape_string($center_lat),
   pg_escape_string($radius));
 
  Does anyone have any ideas on how I can get this to work?  I'm not
  sure what is wrong, since it doesn't seem like I need to create a
  distance column and when I do create one, I get this:

 Is that really the whole query?  Why a having with no group by?

 Can you do me a favor and print out $query instead of the php stuff?
 It might help you as well to troubleshoot to see the real query.


 NIWA is the trading name of the National Institute of Water & Atmospheric
 Research Ltd.



Re: [GENERAL] haversine formula with postgreSQL

2009-09-17 Thread Scott Marlowe
On Thu, Sep 17, 2009 at 8:37 PM, Jonathan Harahush jharah...@gmail.com wrote:
 I do have PostGIS installed and I use it for other things (geoserver), but
 I'm not knowledgeable enough about it to the point where I understand how to
 get it to work with the Google Maps API.  I'll look into it.  In the
 meantime, I was hoping to create something based off of the GMaps/PHP/MySQL
 example I referenced in an earlier post since I'm still learning.
 The reason why I'm using Postgres is because it's installed at work.  We
 don't use MySQL.

 Thanks for all of the help so far!  I appreciate it.

The good news is that while PostgreSQL is more persnickety about SQL
standards and it can feel like you're dealing with a harsh taskmaster,
the lessons learned are good ones.  They'll translate to other
database engines, and will still serve you as MySQL grows more
standards-compliant in the future.



[GENERAL] PostgreSQL nonstandard use of escape in a string literal

2009-09-17 Thread Johnson, Trevor
Using Moodle with PostgreSQL 8.4 and we get warning messages...

2009-09-18 13:48:11 ESTWARNING:  nonstandard use of escape in a string
literal at character 209

2009-09-18 13:48:11 ESTHINT:  Use the escape string syntax for escapes,
e.g., E'\r\n'.

 

standard_conforming_strings is set to off, if we turn it on it breaks
Moodle.

 

Are these just warnings or is there something we need to fix?

If so, is it okay to turn the warnings off with escape_string_warning = off ?

 

 

Regards

 

Trevor

Trevor Johnson
Applications and Database Administrator | Information Management
Services Division | TAFE NSW Riverina Institute
Ph 02 69381351 | Fax 02 69381432 | Mob 0418 600606 | email:
trevor.john...@det.nsw.edu.au mailto:trevor.john...@det.nsw.edu.au  

 


**
This message is intended for the addressee named and may contain
privileged information or confidential information or both. If you
are not the intended recipient please delete it and notify the sender.
**


Re: [GENERAL] PostgreSQL nonstandard use of escape in a string literal

2009-09-17 Thread Jeff Davis
On Fri, 2009-09-18 at 14:05 +1000, Johnson, Trevor wrote:
 Are these just warnings or is there something we need to fix?

They are just warnings. The application is apparently written assuming
the non-standard quoting for string literals.

 If so is it okay to turn the warnings with escape_string_warning =
 off ?

I recommend you inform the authors of the application that they should
update it to use standard-conforming string literals. Then, you can turn
escape_string_warning = off to suppress the warnings while you are
waiting for them to fix it.
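To make the difference concrete (a sketch):

```sql
-- Escape-string syntax: unambiguous under either setting
SELECT E'line1\r\nline2';
-- Plain literal: an escape sequence while standard_conforming_strings
-- is off (hence the warning), but literal backslashes once it is on
SELECT 'line1\r\nline2';
```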

Regards,
Jeff Davis





Re: [GENERAL] PostgreSQL nonstandard use of escape in a string literal

2009-09-17 Thread Peter Eisentraut
On fre, 2009-09-18 at 14:05 +1000, Johnson, Trevor wrote:
 Using Moodle with PostgreSQL 8.4 and we get warning messages...
 
 2009-09-18 13:48:11 ESTWARNING:  nonstandard use of escape in a string
 literal at character 209
 
 2009-09-18 13:48:11 ESTHINT:  Use the escape string syntax for
 escapes, e.g., E'\r\n'.

 “standard_conforming_strings” is set to off, if we turn it on it
 breaks Moodle.

 Are these just warnings or is there something we need to fix?

Well, in your case they are apparently estwarnings because you didn't
put a space at the end of log_line_prefix ;-), but really they are just
warnings.  In the long run, you will want to switch to doing what the
hint, er, esthint says, after which setting standard_conforming_strings
to on becomes OK.  Moodle will need to do the same first as well.



