Re: [GENERAL] Stored Procedures and Functions

2007-06-04 Thread Albe Laurenz
Harpreet Dhaliwal wrote:

 Is it true that postgres doesn't have a notion of Stored
 Procedures, and functions are what it has instead?
 RDBMSs like SQL Server support both stored procedures and functions.
 So I was wondering what the difference is between a Stored
 Procedure and a function.

I think that your questions have not been answered yet.

Yes, it is true, PostgreSQL doesn't have procedures, only functions.

The difference between a function and a procedure is that the former
has a return value, while the latter does not. Procedures can hand back
results via output parameters.

The lack of procedures in PostgreSQL is mitigated by the fact that you
can achieve everything you need with a function:

- If you don't need to return results at all, you define a function
  with return type void (which means that nothing is returned).

- If you need to return more than one result, you can define a
  function with a composite return type (or equivalently with
  what PostgreSQL calls output parameters).
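
A minimal sketch (not from the original mail) showing both variants in
PL/pgSQL; the function names and logic are made up for illustration:

CREATE FUNCTION log_event(msg text) RETURNS void AS $$
BEGIN
    RAISE NOTICE 'event: %', msg;   -- side effects only, nothing is returned
END;
$$ LANGUAGE plpgsql;

CREATE FUNCTION divide(a integer, b integer,
                       OUT quotient integer, OUT remainder integer) AS $$
BEGIN
    quotient  := a / b;
    remainder := a % b;
END;
$$ LANGUAGE plpgsql;

SELECT log_event('backup done');   -- returns void
SELECT * FROM divide(13, 4);       -- quotient = 3, remainder = 1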

Yours,
Laurenz Albe

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


[GENERAL] changing 'mons' in interval?

2007-06-04 Thread Klint Gore
Is there a way to change 'mons' in interval::text to the full word 'months'
without resorting to replace(aninterval::text,'mon','Month')?  If it
can handle locales as well that would be good (but I could live without
it).

klint.

+---+-+
: Klint Gore: Non rhyming:
: EMail   : [EMAIL PROTECTED]   :  slang - the:
: Snail   : A.B.R.I.:  possibilities  :
: Mail  University of New England   :  are useless   :
:   Armidale NSW 2351 Australia : L.J.J.  :
: Fax : +61 2 6772 5376 : :
+---+-+

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


[GENERAL] Tablespaces

2007-06-04 Thread John Gardner
I've been testing one of our apps on PostgreSQL for the last few months
and I'm about ready to put it on the production server, but I need
advice on where to locate the tablespace.  I've been so concerned
getting the app working, I haven't even considered this yet.

I'm using an RPM install of Postgres, so the data directory is located at
/var/lib/pgsql/data/.  Shall I just create a directory under here and
point the tablespace to there?  Any advice would be appreciated.

Thanks in advance

John

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] [HACKERS] table partitioning pl/pgsql helpers

2007-06-04 Thread Enrico Sirola


On 02 Jun 2007, at 00:53, Jim Nasby wrote:

Dropping -hackers; that list is for development of the database  
engine itself.


ok, sorry
e.


---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

  http://www.postgresql.org/docs/faq


Re: [GENERAL] Tablespaces

2007-06-04 Thread Richard Huxton

John Gardner wrote:

I've been testing one of our apps on PostgreSQL for the last few months
and I'm about ready to put it on the production server, but I need
advice on where to locate the tablespace.  I've been so concerned
getting the app working, I haven't even considered this yet.

I'm using a RPM install of Postgres, so the data directory is located at
/var/lib/pgsql/data/.  Shall I just create a directory under here and
point the tablespace to there?  Any advice would be appreciated.


If you're not going to be spreading your installation over multiple 
disks (well, mount-points), there's no need to play with tablespaces at all.


--
  Richard Huxton
  Archonet Ltd

---(end of broadcast)---
TIP 4: Have you searched our list archives?

  http://archives.postgresql.org/


[GENERAL] simple select question

2007-06-04 Thread Erol KAHRAMAN

hi guys,

i am a newbie in postgresql.  I need some help; i am trying to write something
like this:

select * from TABLE where IN ('value1','value2',)
but is it possible to give the values from a file?

select * from TABLE where IN file
--
Erol KAHRAMAN
System Network Administrator


Re: [GENERAL] table partitioning pl/pgsql helpers

2007-06-04 Thread Enrico Sirola

Hi Robert,

On 01 Jun 2007, at 04:08, Robert Treat wrote:

[...]

When I set these up for our clients, I typically separate the
partition creation piece from the data insertion piece.  (Mostly as
partition creation, especially with rules, is a table locking event,
which is better done in a non-critical path.)  If you really must do
it all in one go, you'll have a


I see, and now I agree with you that it's better to decouple partition
creation and data insertion.

much better chance of accomplishing this using all triggers (and
triggers are better anyway)... I think you could do it with a
combination of rules and a trigger (on insert to parent, create the
new partition and insert into it and delete from parent), but it
would certainly need testing to make sure you don't have multi-rule
evaluation... 'course, since you're making a trigger anyway...


Even more importantly, I just discovered (by trying and then reading
the pgsql docs) that the rule system is completely bypassed by the
COPY FROM statement, so I think I'll rewrite everything using some
sort of trigger-generating procedure, because I want this stuff to
work transparently (and we do lots of COPY FROM).
Thanks for the advice,

Enrico Sirola
[EMAIL PROTECTED]
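
For reference, a minimal sketch of the trigger-based routing discussed
above (this is not Enrico's code; the table and column names are
hypothetical, and the child tables are assumed to already exist).
Unlike rules, a per-row trigger like this also fires for COPY FROM:

CREATE OR REPLACE FUNCTION measurements_insert_router() RETURNS trigger AS $$
BEGIN
    -- route the row to the child that covers its date
    IF NEW.logdate >= DATE '2007-06-01' AND NEW.logdate < DATE '2007-07-01' THEN
        INSERT INTO measurements_200706 VALUES (NEW.*);
    ELSE
        INSERT INTO measurements_overflow VALUES (NEW.*);
    END IF;
    RETURN NULL;   -- suppress the insert into the parent table
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER measurements_insert_trg
    BEFORE INSERT ON measurements
    FOR EACH ROW EXECUTE PROCEDURE measurements_insert_router();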





Re: [GENERAL] simple select question

2007-06-04 Thread Richard Huxton

Erol KAHRAMAN wrote:

hi guys,

i am newbie in postgresql.  I need some help; i am trying to write like
this:

select * from TABLE where IN ('value1','value2',)


You'll need to provide a column-name:
  ... WHERE mycolumn IN (...)


but is it possible to give values from file.

select * from TABLE where IN file


Not directly. You'd normally handle this in whatever language you are 
using to query the database.


If you have a lot of values, you might find it useful to read them into 
a temporary table.

CREATE TEMP TABLE my_values (...);
COPY my_values ... FROM filename;
ANALYSE my_values;
SELECT * FROM main_table JOIN my_values ON main_table_column = 
my_values_column


Of course, that assumes you have your values one per line - see the 
manuals for details of what COPY can handle.
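
A concrete version of that approach might look like this (the table,
column and file names are hypothetical; a non-superuser would use psql's
\copy instead of server-side COPY):

CREATE TEMP TABLE my_values (val text);
COPY my_values FROM '/tmp/values.txt';
ANALYSE my_values;
SELECT t.*
  FROM main_table t
  JOIN my_values v ON t.mycolumn = v.val;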

--
  Richard Huxton
  Archonet Ltd

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
  choose an index scan if your joining column's datatypes do not
  match


Re: [GENERAL] simple select question

2007-06-04 Thread Ragnar
On mán, 2007-06-04 at 12:12 +0300, Erol KAHRAMAN wrote:
 hi guys,
 
 i am newbie in postgresql.  I need some help; i am trying to write
 like this:
 
 select * from TABLE where IN ('value1','value2',)

... WHERE what IN (...) ?

 but is it possible to give values from file.
 
 select * from TABLE where IN file

not really. you'd have to import your file
into a table first, possibly with COPY.

gnari



---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [GENERAL] changing 'mons' in interval?

2007-06-04 Thread Martijn van Oosterhout
On Mon, Jun 04, 2007 at 06:51:37PM +1000, Klint Gore wrote:
 Is there a way to change mons in interval::text to the full word months
 without resorting to replace(aninterval::text,'mon','Month')?  If it
 can handle locales as well that would be good (but I could live without
 it).

Have you considered using to_char to get the output in the exact format
you want?
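
A minimal sketch (not from the mail) of what that could look like; the
exact format string is up to you:

SELECT to_char(interval '1 year 7 mons 3 days',
               'YY "years" MM "months" DD "days"');
-- yields something like: 01 years 07 months 03 days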

Have a nice day,
-- 
Martijn van Oosterhout   [EMAIL PROTECTED]   http://svana.org/kleptog/
 From each according to his ability. To each according to his ability to 
 litigate.


signature.asc
Description: Digital signature


Re: [GENERAL] Numeric performances

2007-06-04 Thread Lincoln Yeoh

At 01:42 AM 6/1/2007, Alvaro Herrera wrote:

Vincenzo Romano wrote:
 Hi all.
 I'd like to know whether there is any real-world evaluation (aka test) of
 the performance of the NUMERIC data type when compared to FLOAT8 and FLOAT4.
 The documentation simply says that the former is much slower than the
 latter ones.

It is.  But why do you care?  You either have the correctness that
NUMERIC gives, or you don't.


I suspect it's still useful to know what order of magnitude slower it
is. After all, if it is 1000x slower (not saying it is), some people
may decide it's not worth it or roll their own.


Any hints/gotchas for/when doing such performance tests?

Regards,
Link.


---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] Tablespaces

2007-06-04 Thread Albe Laurenz
 I've been testing one of our apps on PostgreSQL for the last few
months
 and I'm about ready to put it on the production server, but I need
 advice on where to locate the tablespace.  I've been so concerned
 getting the app working, I haven't even considered this yet.
 
 I'm using a RPM install of Postgres, so the data directory is located
at
 /var/lib/pgsql/data/.  Shall I just create a directory under here and
 point the tablespace to there?  Any advice would be appreciated.

You can create it wherever you want as long as the database server has
permissions to access it.
You'll have to specify the full path in the CREATE TABLESPACE statement.
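
A minimal sketch (paths and names are made up for illustration; the
directory must already exist and be owned by the postgres OS user):

CREATE TABLESPACE app_space LOCATION '/srv/pgdata/app_space';
CREATE TABLE orders (id integer) TABLESPACE app_space;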

Are you sure that you need a tablespace at all?

Yours,
Laurenz Albe

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] warm standby server stops doing checkpoints after a while

2007-06-04 Thread Teodor Sigaev

2007-06-01 23:00:00.001 CEST:% LOG:  GIN incomplete splits=8


Just to be sure: the patch fixes the *creation* of the WAL records, not their
replay. So the primary db should be patched too.


During the weekend I found a possible deadlock in the GIN locking protocol
between concurrent UPDATE and VACUUM queries involving the same GIN index.
Strange, but I didn't see it in 8.2 and even now I can't reproduce it there.
It's easy to reproduce only on HEAD with the recently added
ReadBufferWithStrategy() call instead of ReadBuffer(). The
ReadBufferWithStrategy() call was added to implement a limited-size ring of
buffers for VACUUM. Nevertheless, it's a possible scenario in 8.2.


The attached patch fixes that deadlock bug too. Also, the previous version of my
patch had a mistake which is observable on a CREATE INDEX .. USING GIN query.


--
Teodor Sigaev   E-mail: [EMAIL PROTECTED]
   WWW: http://www.sigaev.ru/


patch_wal_gin.v6.gz
Description: Unix tar archive

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [GENERAL] Numeric performances

2007-06-04 Thread PFC

It is.  But why do you care?  You either have the correctness that
NUMERIC gives, or you don't.


I suspect it's still useful to know what order of magnitude slower it  
is. After all if it is 1000x slower (not saying it is), some people may  
decide it's not worth it or roll their own.


Any hints/gotchas for/when doing such performance tests?



forum_bench= CREATE TEMPORARY TABLE test AS SELECT a::FLOAT AS f,
a::NUMERIC AS n, a::INTEGER AS i, a::BIGINT AS b FROM
generate_series( 1,100000 ) AS a;

SELECT
Temps : 1169,125 ms

forum_bench= SELECT sum(i) FROM test;
Temps : 46,589 ms

forum_bench= SELECT sum(b) FROM test;
Temps : 157,018 ms

forum_bench= SELECT sum(f) FROM test;
Temps : 63,865 ms

forum_bench= SELECT sum(n) FROM test;
Temps : 124,816 ms

SELECT * FROM test ORDER BY i LIMIT 1; SELECT * FROM test ORDER BY b LIMIT  
1; SELECT * FROM test ORDER BY f LIMIT 1; SELECT * FROM test ORDER BY n  
LIMIT 1;

Temps : 68,996 ms
Temps : 68,917 ms
Temps : 62,321 ms
Temps : 71,880 ms

BEGIN; CREATE INDEX test_i ON test(i); CREATE INDEX test_b ON test(b);  
CREATE INDEX test_f ON test(f); CREATE INDEX test_n ON test(n); ROLLBACK;

CREATE INDEX
Temps : 102,901 ms
CREATE INDEX
Temps : 123,406 ms
CREATE INDEX
Temps : 105,255 ms
CREATE INDEX
Temps : 134,468 ms



---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] warm standby server stops doing checkpoints after a while

2007-06-04 Thread Teodor Sigaev

1. After a certain point, consecutive GIN index splits cause a problem.
The new RHS block numbers are consecutive from 111780+

That's the newly created page. The split page might have any block number.



2. The incomplete splits stay around indefinitely after creation and we
aren't trying to remove the wrong split at any point. We're either never
creating an xlog record, or we are ignoring it in recovery, or we are
somehow making multiple entries then not removing all of them.

Agreed



3. The root seems to move, which isn't what I personally was expecting
to see. It seems root refers to the highest parent involved in the
split.
root in this context means the parent of the split page. Actually, there is a lot of
B-tree in GIN; see http://www.sigaev.ru/gin/GinStructure.pdf


4. We're writing lots of redo in between failed page splits. So *almost*
everything is working correctly.

5. This starts to happen when we have very large indexes. This may be
coincidental but the first relation file is fairly full (900+ MB).


Yes. It seems to me that the conditions for the error are very rare, and the B-tree
over ItemPointers (the second level of GIN) has a big capacity, 1000+ items per page.
So splits occur rather rarely.




--
Teodor Sigaev   E-mail: [EMAIL PROTECTED]
   WWW: http://www.sigaev.ru/

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

  http://www.postgresql.org/docs/faq


Re: [GENERAL] Numeric performances

2007-06-04 Thread Vincenzo Romano
Hmmm ...

It sounds quite strange to me that numeric is faster than bigint.

Even if bigint didn't get hardware support in the CPU, it should have been
faster than numeric, as it should be mapped onto two 32-bit integers.

Numeric algorithms should be linear in complexity (in the number of digits),
whereas float, float8, integer and bigint should be constant in my mind.

Nonetheless the suggested fast test makes some sense in my mind.

On Monday 04 June 2007 12:06:47 PFC wrote:
  It is.  But why do you care?  You either have the correctness that
  NUMERIC gives, or you don't.
 
  I suspect it's still useful to know what order of magnitude slower it
  is. After all if it is 1000x slower (not saying it is), some people may
  decide it's not worth it or roll their own.
 
  Any hints/gotchas for/when doing such performance tests?

 forum_bench= CREATE TEMPORARY TABLE test AS SELECT a::FLOAT AS f,
 a::NUMERIC AS n, a::INTEGER AS i, a::BIGINT AS b FROM
 generate_series( 1,100000 ) AS a;
 SELECT
 Temps : 1169,125 ms

 forum_bench= SELECT sum(i) FROM test;
 Temps : 46,589 ms

 forum_bench= SELECT sum(b) FROM test;
 Temps : 157,018 ms

 forum_bench= SELECT sum(f) FROM test;
 Temps : 63,865 ms

 forum_bench= SELECT sum(n) FROM test;
 Temps : 124,816 ms

 SELECT * FROM test ORDER BY i LIMIT 1; SELECT * FROM test ORDER BY b LIMIT
 1; SELECT * FROM test ORDER BY f LIMIT 1; SELECT * FROM test ORDER BY n
 LIMIT 1;
 Temps : 68,996 ms
 Temps : 68,917 ms
 Temps : 62,321 ms
 Temps : 71,880 ms

 BEGIN; CREATE INDEX test_i ON test(i); CREATE INDEX test_b ON test(b);
 CREATE INDEX test_f ON test(f); CREATE INDEX test_n ON test(n); ROLLBACK;
 CREATE INDEX
 Temps : 102,901 ms
 CREATE INDEX
 Temps : 123,406 ms
 CREATE INDEX
 Temps : 105,255 ms
 CREATE INDEX
 Temps : 134,468 ms


-- 
Vincenzo Romano
--
Maybe Computer will never become as intelligent as Humans.
For sure they won't ever become so stupid. [VR-1988]

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] why postgresql over other RDBMS

2007-06-04 Thread Alban Hertroys
Tom Lane wrote:
 Right.  Multiple seqscans that are anywhere near reading the same block
 of a table will tend to self-synchronize.  There is a patch under
 consideration for 8.3 that helps this along by making seqscans run
 circularly --- that is, not always from block 0 to block N, but from
 block M to N and then 0 to M-1, where the start point M can be chosen
 by looking to see where any other concurrent seqscan is presently
 reading.  Once you've got a reasonable start point, you don't have to do
 anything else.
 
   regards, tom lane

Interesting concept (as expected from you guys).

Would that imply that the sequential scan of one connection could place
data into the disk cache that another parallel seq scan would need soon?
Would that speed up parallel seq scans? Or am I being optimistic here.

Regards,
-- 
Alban Hertroys
[EMAIL PROTECTED]

magproductions b.v.

T: ++31(0)534346874
F: ++31(0)534346876
M:
I: www.magproductions.nl
A: Postbus 416
   7500 AK Enschede

// Integrate Your World //

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org/


Re: [GENERAL] PITR Base Backup on an idle 8.1 server

2007-06-04 Thread Marco Colombo

Greg Smith wrote:
The way you're grabbing 
files directly from the xlog directory only works because your commit 
workload is so trivial that you can get away with it, and because you 
haven't then tried to apply future archive logs.


Well, it's only because I don't need future logs, just like I don't need 
future files. Backup is at 2:00 AM, any change after that is 
potentially lost. That includes e-mails, web contents, and database 
contents. The database contents are in no way different to us.


It's the "your commit workload is so trivial that you can get away with
it" part I don't really get, but more on this later.


In the general case, 
circumventing the archiving when the backup is going on won't guarantee 
everything is ordered just right for PITR to work correctly.


Generic PITR? You mean if backup is at 2:00 AM and the server crashes
(all disks lost) at 2:00 PM, you want to be able to recover to some
time like 11:00 AM, and be precise about it? That's PITR to me - and the
"precise" part is key here... either the time or the transaction ID
would do; the point is being able to draw a line and say "anything
before this is correct".


Well if that's what you mean by PITR, I never claimed my method would 
give you that ability. I'm pretty aware it won't do, in the general 
case. If you need that, you need to archive all the logs created after 
the backup, that's pretty obvious.


But even under heavy write load, my method works, if the only point in 
time you want to be able to recover is 2:00AM.


It works for you too; it gives you a nice working backup. If you also need
real PITR, your archive_command is going to be something like:


archive_command = 'test ! -f /var/lib/pgsql/backup_lock && cp %p
/my_archive_dir/%f'


I consider 
what you're doing a bad idea that you happen to be comfortable with the 
ramifications of, and given the circumstances I understand how you have 
ended up with that solution.


I would highly recommend you consider switching at some point to the 
solution Simon threw out:



create table xlog_switch as
select '0123456789ABCDE' from generate_series(1,100);
drop table xlog_switch;


Ok, now the segment gets rotated, and a copy of the file appears 
somewhere. What's the difference in having the archive_command store it 
or your backup procedure store it?


Let's say my archive_command is a cp to another directory, and let's
say step 5) is a cp too. What exactly does forcing a segment switch
with dummy data buy me, instead of doing a cp myself on the real segment data?


I mean, both ways would do.


you should reconsider doing your PITR backup
properly--where you never touch anything in the xlog directory and 
instead only work with what the archive_command is told.


Well, I'm copying files. That's exactly what a typical archive_command
does. It's not special in any way, just a cp (or tar or rsync or
whatever). Unless you mean I'm not supposed to copy a partially filled
segment. There can be only one; the others would be full ones, and full
ones are no problem. I think PG correctly handles the partial one if I
drop it in pg_xlog at recovery time.


That segment you need to treat specially at recovery time, if you use my
procedure (in my case, I don't). If you have a later copy of it (most
likely an archived one), you have to make it available to PG instead of
the old one, if you want to make use of the rest of the archived
segments. If you don't want to care about this, then I agree your method
of forcing a segment switch is simpler. There's no partial segment at
all. Anyway, it's running a psql -c at backup time vs. a test -nt && rm
at restore time; not a big deal in either case.


.TM.

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


[GENERAL] Jumping Weekends

2007-06-04 Thread Ranieri Mazili

Hello,
(sorry for my poor english)

It's my first post here, and my question is very simple (I guess). I have a
function to populate a table; inside a WHILE loop I have the following piece of code:


--Jump Weekend
IF (SELECT TO_CHAR(CAST(PRODUCTION_DATE as date),'Day')) = 'Saturday' THEN
  PRODUCTION_DATE := PRODUCTION_DATE + 2;
END IF;

It's meant to skip weekends by adding 2 days to the variable PRODUCTION_DATE;
the type of the variable is DATE. But, I don't know why, it doesn't
work properly: it doesn't skip weekends, it just continues normally.
Does someone know what's happening, or what am I doing wrong?


I appreciate any help.
Thanks

Ranieri Mazili




---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] Numeric performances

2007-06-04 Thread PFC


This is a 32 bit CPU by the way.

Consider this :

- There are 100K rows
- The CPU executes about 3 billion instructions per second if everything  
is optimum

- SELECT sum(n) FROM test, takes, say 60 ms

This gives about 1800 CPU ops per row.
A Float addition versus an Int addition is a drop in the sea.

	I believe the marked difference between Floats/Ints (faster) and  
Numeric/Bigint (slower) comes from being passed by value or by pointers.


	A single access which misses the CPU cache and has to go fetch data from  
the real RAM spends a lot more cycles than the simple loops in a NUMERIC  
addition which will hit L1 cache.
	Nowadays cache access patterns matter more than how many actual CPU  
instructions are executed...



forum_bench= SELECT sum(i) FROM test;
Temps : 46,589 ms
forum_bench= SELECT sum(f) FROM test;
Temps : 63,865 ms



forum_bench= SELECT sum(b) FROM test;
Temps : 157,018 ms
forum_bench= SELECT sum(n) FROM test;
Temps : 124,816 ms


---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] multimaster

2007-06-04 Thread Tino Wildenhain

Alexander Staubo schrieb:

On 6/1/07, Andrew Sullivan [EMAIL PROTECTED] wrote:

These are all different solutions to different problems, so it's not
surprising that they look different.  This was the reason I asked,
What is the problem you are trying to solve?


You mean aside from the obvious one, scalability?

The database is becoming a bottleneck for a lot of so-called Web
2.0 apps which use a shared-nothing architecture (such as Rails,
Django or PHP) in conjunction with a database: lots of ad-hoc database
queries that come not just from web hits but also from somewhat
awkwardly fitting an object model onto a relational database.


...


the single server, but I would hope that there would, at some point,
appear a solution that could enable a database to scale horizontally
with minimal impact on the application. In light of this need, I think
we could be more productive by rephrasing the question "how/when we
can implement multimaster replication?" as "how/when can we implement
horizontal scaling?".

As it stands today, horizontally partitioning a database into multiple
separate shards is incredibly invasive on the application
architecture, and typically relies on brittle and non-obvious hacks
such as configuring sequence generators with staggered starting
numbers, omitting referential integrity constraints, sacrificing
transactional semantics, and moving query aggregation into the app
level. On top of this, dumb caches such as Memcached are typically


Did you have a look at BizgresMPP?

Especially for your shared-nothing approach it seems to be a better
solution than just replicating everything.

Regards
Tino

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
  choose an index scan if your joining column's datatypes do not
  match


Re: [GENERAL] debugging C functions

2007-06-04 Thread Gregory Stark
Islam Hegazy [EMAIL PROTECTED] writes:

 I do the same but I use the ddd debugger
 1) Load the shared library from the SQL
 2) Open the .c file of my function
 3) Place the break points
 4) Execute the sql statement 'Select * from Myfn(...);'

 The result is displayed and the debugger doesn't stop at the breakpoints.

Are you sure you're attaching to the right process?

One way to do it is to run select pg_backend_pid() from psql and attach to
that pid.
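
A sketch of that workflow (the pid, binary path and function name are
hypothetical):

-- in the psql session that will run the query
SELECT pg_backend_pid();   -- say it returns 12345

# in a shell on the server, attach the debugger to that backend
gdb -p 12345
(gdb) break my_c_function
(gdb) continue

-- back in the same psql session
SELECT * FROM Myfn(...);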

-- 
  Gregory Stark
  EnterpriseDB  http://www.enterprisedb.com


---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [GENERAL] Corruption of files in PostgreSQL

2007-06-04 Thread Franz . Rasper
What OS are you running ?

Linux (32 or 64 bit)? Ext3 filesystem? Which kernel version?
A bug in ext3 / the Linux kernel / hardware (RAID controller)?

Does the error only happen under heavy load?

regards,

-Franz

-----Original Message-----
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On behalf of Paolo Bizzarri
Sent: Saturday, 2 June 2007 07:46
To: Purusothaman A
Cc: Richard Huxton; pgsql-general@postgresql.org
Subject: Re: [GENERAL] Corruption of files in PostgreSQL


Hi everyone,

a little update.

We have upgraded our system to 7.4.17. The problem of truncated files
now seems better, but it is still present. We have not found a clearly
understandable pattern for why this happens.

Just to provide some further information:

- we create a file and store it in the DB;

- we give the file to the user, and he can modify the file as he wishes;

- we store the modified file back in the DB;

- the last two steps can happen several times.

Any hint?

Best regards.

Paolo Bizzarri
Icube S.r.l.



On 5/30/07, Purusothaman A [EMAIL PROTECTED] wrote:
 Paolo Bizzarri,

 I am also using postgresql in my application, and I am also facing a file
 object corruption problem.

 I have already discussed it several times with Richard Huxton, and ended
 without any clue.

 Here I am briefing my problem; see if you find any clue about it.
 I am storing/retrieving my file in postgresql using the lo_export() and
 lo_import() APIs.

 After a few weeks (as the application is being used, the number of file
 objects in the database also grows), my file object gets corrupted. And I
 have no clue as to what causes this problem.

 I confirmed the file corruption by the following query,

 sfrs2= select loid, pageno, length(data) from pg_largeobject where loid =
 101177 and pageno = 630;
   loid  | pageno | length
 --------+--------+--------
  101177 |    630 |    181
 (1 row)

 But here is the result of the same query before corruption (i.e., immediately
 after the file object was added to the table):

 fasp_test= select loid, pageno, length(data) from pg_largeobject where loid
 = 106310 and pageno = 630;
   loid  | pageno | length
 --------+--------+--------
  106310 |    630 |    205
 (1 row)

 I uploaded the same file in both databases (sfrs2, fasp_test). The first
 result is from after the corruption, and the latter from before the corruption.

 You can also confirm your problem like this. And I strongly believe that
 there is some bug in PostgreSQL.

 Kindly don't forget to alert me once you find the solution/cause.

 Regards,
 Purusothaman A


 On 5/30/07, Paolo Bizzarri [EMAIL PROTECTED] wrote:
 
  On 5/30/07, Richard Huxton [EMAIL PROTECTED] wrote:
   Paolo Bizzarri wrote:
We use postgres as a backend, and we are experiencing some
corruption problems on openoffice files.
  
   1. How are you storing these files?
 
  Files are stored as large objects. They are written with an lo_write,
  and their contents are passed as a binary object.
 
   2. What is the nature of the corruption?
 
  Apparently, files get truncated.
 
As our application is rather complex (it includes Zope as an
application server, OpenOffice as a document server and as a client)
we need some info on how to check that we are interacting correctly
with Postgres.
  
   Shouldn't matter.
 
  I hope so...
 
We are currently using:
   
- PostgreSQL 7.4.8;
  
   Well, you need to upgrade this - version 7.4.17 is the latest in the
7.4
   series. You are missing 9 separate batches of bug and security fixes.
 
  Ok. We will upgrade and see if this can help solve the problem.
 
  
- pyscopg 1.1.11 ;
- Zope 2.7.x;
- Openoffice 2.2.
  
   None of this should matter really, unless there's some subtle bug in
   psycopg causing corruption of data in-transit.
  
   Let's get some details on the two questions above and see if there's a
   pattern to your problems.
 
  Ok. Thank you.
 
  Paolo Bizzarri
  Icube S.r.l.
 
  ---(end of
 broadcast)---
  TIP 5: don't forget to increase your free space map settings
 



 --
 http://PurusothamanA.wordpress.com/

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] warm standby server stops doing checkpoints after a while

2007-06-04 Thread Teodor Sigaev

Ooops. Patch doesn't apply cleanly. New version.


Attached patch fixes that deadlock bug too. And, previous version of my 
patch has a mistake which is observable on CREATE INDEX .. USING GIN query.


--
Teodor Sigaev   E-mail: [EMAIL PROTECTED]
   WWW: http://www.sigaev.ru/


patch_wal_gin.v7.gz
Description: Unix tar archive

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org/


Re: [GENERAL] Numeric performances

2007-06-04 Thread Vincenzo Romano
I'm getting more confused.

If the algorithm used to do the sum is a drop in the sea,
then the resources needed to pass a pointer on the stack are
a molecule in the drop! :-)

Nonetheless I think that your directions are right:
doing actual queries instead of inspecting the algorithms themselves
should yield numbers that are (by definition) coherent with real-world usage!

Another point is related to storage.
I think that as long as the storage for a numeric is within a few bytes, the
difference should be related only to the algorithm.
But with larger sizes, you have no option with floats!

So, finally, the question should have been:

When used in the same ranges as FLOAT8 or FLOAT, what'd be the performance 
impact of NUMERIC?

Sorry for having been unclear. And thanks for the hints.

On Monday 04 June 2007 13:17:49 PFC wrote:
   This is a 32 bit CPU by the way.

   Consider this :

 - There are 100K rows
 - The CPU executes about 3 billion instructions per second if everything
 is optimum
 - SELECT sum(n) FROM test, takes, say 60 ms

   This gives about 1800 CPU ops per row.
   A Float addition versus an Int addition is a drop in the sea.

   I believe the marked difference between Floats/Ints (faster) and
 Numeric/Bigint (slower) comes from being passed by value or by pointers.

   A single access which misses the CPU cache and has to go fetch data from
 the real RAM spends a lot more cycles than the simple loops in a NUMERIC
 addition which will hit L1 cache.
   Nowadays cache access patterns matter more than how many actual CPU
 instructions are executed...

  forum_bench= SELECT sum(i) FROM test;
  Temps : 46,589 ms
  forum_bench= SELECT sum(f) FROM test;
  Temps : 63,865 ms
 
  forum_bench= SELECT sum(b) FROM test;
  Temps : 157,018 ms
  forum_bench= SELECT sum(n) FROM test;
  Temps : 124,816 ms



-- 
Vincenzo Romano
--
Maybe Computer will never become as intelligent as Humans.
For sure they won't ever become so stupid. [VR-1988]

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [GENERAL] [SQL] Jumping Weekends

2007-06-04 Thread Pavel Stehule

Hello,

you forgot about Sunday. Your solution can work, but isn't too effective.

you can do:

production_date := production_date +
 CASE extract(dow from production_date)
   WHEN 0 THEN 1 -- sunday
   WHEN 6 THEN 2 -- saturday
   ELSE 0 END;

there is no slow string comparison, and it's one SQL statement instead of two.
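
A quick way to check the extract(dow ...) values the CASE relies on
(example dates; 2007-06-02 was a Saturday):

SELECT extract(dow FROM DATE '2007-06-02');  -- 6 (Saturday)
SELECT extract(dow FROM DATE '2007-06-03');  -- 0 (Sunday)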

Regards
Pavel Stehule

2007/6/4, Ranieri Mazili [EMAIL PROTECTED]:

Hello,
(sorry for my poor english)

It's my first post here, and my doubt is very simple (I guess). I have a
function to populate a table, into WHILE I have the follow piece of code:

--Jump Weekend
IF (SELECT TO_CHAR(CAST(PRODUCTION_DATE as date),'Day')) = 'Saturday' THEN
   PRODUCTION_DATE := PRODUCTION_DATE + 2;
END IF;

It's to jump weekend adding 2 days on the variable PRODUCTION_DATE,
the type of the variable is DATE. But, I don't know why, it's doesn't
work properly, it's doesn't jump weekends, it's continues normally,
someone knows what's happen or what am I doing wrong?

I appreciate any help.
Thanks

Ranieri Mazili




---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org



---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

  http://www.postgresql.org/docs/faq


Re: [GENERAL] multimaster

2007-06-04 Thread Alexander Staubo

On 6/4/07, Tino Wildenhain [EMAIL PROTECTED] wrote:

Did you have a look at BizgresMPP?

Especially for your shared-nothing approach it seems to be a better
solution then just replicating everything.


I had completely forgotten about that one. Bizgres.org seems down at
the moment, but looking at their whitepaper, the architecture looks
similar to that of pgpool-II; from what I can see, you connect through
a proxy that transparently partitions data across multiple PostgreSQL
database instances, and then queries them in parallel and merges the
query results. Looks promising.

Alexander.

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] Tablespaces

2007-06-04 Thread Ray Stell
On Mon, Jun 04, 2007 at 09:49:03AM +0100, John Gardner wrote:
 /var/lib/pgsql/data/.  Shall I just create a directory under here and
 point the tablespace to there?  Any advice would be appreciated.

One of the points of tablespaces is to balance I/O over different
controllers/disks.  Someone should evaluate the current load, overlay the
estimated load of the new app, and see if the current data path will support
it.  Include growth estimates for all the apps.  If the load is a concern,
create the tablespace on a separate I/O path.  What are the possible results
of not doing this analysis?

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [Fwd: Re: [GENERAL] Autovacuum keeps vacuuming a table disabled in pg_autovacuum]

2007-06-04 Thread Andrew Sullivan
On Fri, Jun 01, 2007 at 01:27:14PM -0700, Ron St-Pierre wrote:
  imp=# select age(datfrozenxid) from pg_database where datname = 'imp';
      age
  ------------
   1571381411
  (1 row)
 
 Time to start VACUUM FULL ANALYZE over the weekend.

I guess this comes too late, but you don't need VACUUM FULL for that.
VACUUM FULL _does not_ mean "vacuum everything"!

A

-- 
Andrew Sullivan  | [EMAIL PROTECTED]
Unfortunately reformatting the Internet is a little more painful 
than reformatting your hard drive when it gets out of whack.
--Scott Morris

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [Re] Re: [GENERAL] Winsock error 10035 while trying to upgrade from 8.0 to 8.2

2007-06-04 Thread Magnus Hagander
On Tue, May 29, 2007 at 11:25:30PM +0200, Magnus Hagander wrote:
  What do you think? Maybe a bug in the Windows server installation I have
  (these machines have not been updated for some time; perhaps I should try
  to do that and see if the problem is still there. In the long run, I plan
  to upgrade to Windows 2003).
 
 I don't *think* it should be a bug with your version, it doesn't look
 like it. but if you're not on the latest service pack, that's certainly
 possible. Please update to latest servicepack + updates from Windows
 Update / WSUS, and let me know if the problem persists.
 
 Meanwhile, I'll try to cook up a patch.

I have applied a patch for this to HEAD and 8.2. It includes a small wait
so we don't hit it too hard, and a limit of 5 retries before we simply give
up - so we don't end up in an infinite loop.

//Magnus


---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] High-availability

2007-06-04 Thread Andrew Sullivan
On Sun, Jun 03, 2007 at 01:35:49PM -0400, Lew wrote:
 How much data do you put in the DB?  Oracle has a free version, but it has 
 size limits.

AFAIK, Oracle's free version doesn't include RAC, which is what would
be needed to satisfy the request anyway.

A

-- 
Andrew Sullivan  | [EMAIL PROTECTED]
Users never remark, Wow, this software may be buggy and hard 
to use, but at least there is a lot of code underneath.
--Damien Katz

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: NULLS and User Input WAS Re: [GENERAL] multimaster

2007-06-04 Thread Andrew Sullivan
On Mon, Jun 04, 2007 at 12:37:42AM +0200, PFC wrote:
 NULL usually means unknown or not applicable

Aaaargh!  No, it doesn't.  It means NULL.  Nothing else.  

If it meant "unknown" or "not applicable" or anything else, then

SELECT * FROM nulltbl a, othernulltbl b
WHERE a.nullcol = b.nullcol

would return rows where a.nullcol contained NULL and b.nullcol
contained NULL.  But it doesn't, because !(NULL = NULL).  

It's too bad indeed that the originators of SQL used three-value
rather than five-value logic, but this is what we have.  If you
happen to want to use NULL to mean something specific in some
context, go ahead, but you shouldn't generalise that to "usually
means" anything.

A

-- 
Andrew Sullivan  | [EMAIL PROTECTED]
Everything that happens in the world happens at some place.
--Jane Jacobs 

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] Corruption of files in PostgreSQL

2007-06-04 Thread Franz . Rasper
If there is any database driver (which was built with the
old postgresql sources/libs), (re)build this driver with
the new postgresql sources/libs.

Greetings,

-Franz 

-----Original Message-----
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On behalf of Paolo Bizzarri
Sent: Saturday, 2 June 2007 07:46
To: Purusothaman A
Cc: Richard Huxton; pgsql-general@postgresql.org
Subject: Re: [GENERAL] Corruption of files in PostgreSQL


Hi everyone,

a little update.

We have upgraded our system to 7.4.17. The problem of truncated files
seems now better, but it is still present. We have not found a clearly
understandable pattern on why this happens.

Just to provide some further information:

- we create a file and store on the DB;

- we give the file to the user, and he can modify at its wish the file;

- we store back the modified file on the DB;

- the last two points can happen several times.

Any hint?

Best regards.

Paolo Bizzarri
Icube S.r.l.



On 5/30/07, Purusothaman A [EMAIL PROTECTED] wrote:
 Paolo Bizzarri,

 I am also using postgresql in my application and also facing file object
 corruption problem.

 I already discussed several times with Richard Huxton, and ended without
any
 clue.

 Here I am briefing my problem, see if u find any clue about it.
 I am storing/retrieving my file in postgresql using lo_export() and
 lo_import() api.

 after few weeks (as application is being used - number of file objects in
 database also grows) my file object gets corrupted. And I have no clue
about
 which causes this problem.

 I confirmed the file corruption by the following query,

 sfrs2= select loid, pageno, length(data) from pg_largeobject where loid =
 101177 and pageno = 630;
   loid  | pageno | length
 --------+--------+--------
  101177 |    630 |    181
 (1 row)

 But actually the result of the above query before corruption(ie,
immediately
 after file object added to table)

 fasp_test= select loid, pageno, length(data) from pg_largeobject where
loid
 = 106310 and pageno = 630;
   loid  | pageno | length
 --------+--------+--------
  106310 |    630 |    205
 (1 row)

 I uploaded same file in both(sfrs2, fasp_test) databases. The first one
 result is after the corruption. and the later is before corruption.

 You also confirm you problem like this. And I strongly believe that, there
 is some bug in PostgreSQL.

 Kindly don't forget to alert me once u find solution/cause.

 Regards,
 Purusothaman A


 On 5/30/07, Paolo Bizzarri [EMAIL PROTECTED] wrote:
 
  On 5/30/07, Richard Huxton [EMAIL PROTECTED] wrote:
   Paolo Bizzarri wrote:
We use postgres as a backend, and we are experimenting some
corruption
problems on openoffice files.
  
   1. How are you storing these files?
 
  Files are stored as large objects. They are written with an lo_write
  and its contents is passed as a Binary object.
 
   2. What is the nature of the corruption?
 
  Apparently, files get truncated.
 
As our application is rather complex (it includes Zope as an
application server, OpenOffice as a document server and as a client)
we need some info on how to check that we are interacting correctly
with Postgres.
  
   Shouldn't matter.
 
  I hope so...
 
We are currently using:
   
- PostgreSQL 7.4.8;
  
   Well, you need to upgrade this - version 7.4.17 is the latest in the
7.4
   series. You are missing 9 separate batches of bug and security fixes.
 
  Ok. We will upgrade and see if this can help solve the problem.
 
  
- pyscopg 1.1.11 ;
- Zope 2.7.x;
- Openoffice 2.2.
  
   None of this should matter really, unless there's some subtle bug in
   psycopg causing corruption of data in-transit.
  
   Let's get some details on the two questions above and see if there's a
   pattern to your problems.
 
  Ok. Thank you.
 
  Paolo Bizzarri
  Icube S.r.l.
 
  ---(end of
 broadcast)---
  TIP 5: don't forget to increase your free space map settings
 



 --
 http://PurusothamanA.wordpress.com/

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] High-availability

2007-06-04 Thread Chander Ganesan

Madison Kelly wrote:

Hi all,

  After realizing that 'clustering' in the PgSQL docs means multiple 
DBs behind one server, and NOT multiple machines, I am back at square 
one, feeling somewhat the fool. :P


  Can anyone point me to docs/websites that discuss options on 
replicating in (as close as possible to) realtime? Ideally with load 
balancing while both/all servers are up, and failover/resyncing when a 
member fails and is restored.
If you're interested in the less than ideal case (no load balancing, 
but synchronous replication in a warm standby type mode), there are 
several options, such as shared disk (two systems sharing a SAN or NAS 
with heartbeat-style fail over - shared disk scenario), or DRBD (where 
block level changes to one device are mirrored in real-time over to 
another, with heartbeat style fail over - this is a shared nothing 
type scenario).  It's not too hard to put together a warm standby 
synchronous replication mechanism with overhead that isn't too much more 
than what you incur by enabling PITR...  Such systems can also have very 
fast failover on failure detection (via heartbeat2), and be synchronous.


I think you'll typically find that you can get one or the other - 
synchronous replication, or load balancing...but not both.  On the other 
hand, if you were really serious about having close to both, you could 
have a three node setup - two (a provider and subscriber) that run using 
Slony-I (and async replication) and one that runs using one of the 
aforementioned methods (i.e., DRBD and warm-standby synchronous 
replication).  In such cases a failover would mean switching to the 
synchronous replication system.  You should even be able to get SLONY to
continue to provide load balancing in such a case, without
having to re-sync - though I haven't tried this myself...  You'd still 
have a potential query that got stale data (when it went to a Slony-I 
subscriber), but you would never lose a committed transaction.  You'd 
have the added benefit of a shared nothing environment as well...


As a side plug, we discuss and implement a few of these options in our 
PostgreSQL performance tuning course.. 
http://www.otg-nc.com/training-courses/coursedetail.php?courseid=47&cat_id=8


  Is this even possible on PostgreSQL?

  Being a quite small company, proprietary hardware and fancy software 
licenses are not possible (ie: 'use oracle' won't help).


  I've looked at slony, but it looks more like a way to push 
occasional copies to slaves, and isn't meant to be real time. Am I 
wrong by chance?


  Thanks for any help/tips/pointers!


Chander Ganesan
Open Technology Group, Inc.
One Copley Parkway, Suite 210
Morrisville, NC  27560
Phone: 877-258-8987/919-463-0999
http://www.otg-nc.com
*Expert PostgreSQL Training - On-Site and Public Enrollment*


Madi

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: NULLS and User Input WAS Re: [GENERAL] multimaster

2007-06-04 Thread Ian Harding

On 6/3/07, PFC [EMAIL PROTECTED] wrote:


 Yeah, it is awful ;^)  However the existing system is equally awful
 because there is no way to enter NULL!

Consider this form :

First name :Edgar
Middle name :   J.
Last name : Hoover

Now, if someone has no middle name, like John Smith, should we use NULL
or "" for the middle name ?


"NMN" for No Middle Name.

http://www.google.com/search?hl=en&q=data+standards+no+middle+name+NMN&btnG=Search

The hazard with doing stuff like that is some joker could name their
kid Billy "NMN" Simpson.  Or this

http://www.snopes.com/autos/law/noplate.asp

If the "None" identifier can't be guaranteed not to conflict with
data, the best thing is a boolean for None.


NULL usually means unknown or not applicable, so I believe we have to
use the empty string here. It makes sense to be able to concatenate the
three parts of the name, without having to put COALESCE() everywhere.



Null always means unknown.  N/A usually means Not Applicable.  I use
COALESCE once in a view and never again.


Now consider this form :

City:
State   :
Country :

If the user doesn't live in the US, State makes no sense, so it should
be NULL, not the empty string. There is no unnamed state. Also, if the
user does not enter his city name, this does not mean he lives in a city
whose name is . So NULL should be used, too.



There are states in other countries, but I get your meaning.  But if
someone doesn't enter their middle name, that doesn't mean their
parents named them Billy "" Simpson either, right?

I think there is an argument for filling fields with empty strings
where they are _known_ not to exist but they are _applicable_, but I
don't do it.  I prefer the consistency of NULL for absent data versus
WHERE (mname = '' OR mname IS NULL).  Again, the user failing to enter
it when presented an opportunity does not meet the "known not to
exist" test for me.


It is very context-dependent.



Yeah, unless you are a stubborn old null zealot like me!

- Ian

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


Re: NULLS and User Input WAS Re: [GENERAL] multimaster

2007-06-04 Thread Owen Hartnett

At 12:37 AM +0200 6/4/07, PFC wrote:

Yeah, it is awful ;^)  However the existing system is equally awful
because there is no way to enter NULL!


Consider this form :

First name :Edgar
Middle name :   J.
Last name : Hoover

Now, if someone has no middle name, like John Smith, should we use 
NULL or  for the middle name ?
NULL usually means unknown or not applicable, so I believe we 
have to use the empty string here. It makes sense to be able to 
concatenate the three parts of the name, without having to put 
COALESCE() everywhere.


Now consider this form :

City:
State   :
Country :

If the user doesn't live in the US, State makes no sense, so it 
should be NULL, not the empty string. There is no unnamed state. 
Also, if the user does not enter his city name, this does not mean 
he lives in a city whose name is . So NULL should be used, too.


It is very context-dependent.


My take on the NULL philosophy is that NULL should indicate that no 
data has been entered.  If the data for the record is not applicable, 
then it should have a zero length string, indicating that the field 
has been considered by the user, and that a blank value is 
appropriate.  A NULL field on an entered record should indicate an 
error condition, rather than that the field is not appropriate to the 
context.


Thus, NULL fields on a completed record would mean either that they 
were never presented to the user (thus, did not appear in the UI), or 
an error condition.


The advantage of this is that, if enforced, a count of the non-null
records will show those operated on by a user, vs. those untouched by
a user.


-Owen

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


[GENERAL] what to do when pg_cancel_backend() doesnt work?

2007-06-04 Thread Rhys Stewart

Well,
that pretty much sums it up.  pg_cancel_backend() is not working. The
query is still there. The box is across the city and the admin is not
in; is there a way to remotely restart the server from within PG?

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [Fwd: Re: [GENERAL] Autovacuum keeps vacuuming a table disabled in pg_autovacuum]

2007-06-04 Thread Ron St-Pierre

Andrew Sullivan wrote:

On Fri, Jun 01, 2007 at 01:27:14PM -0700, Ron St-Pierre wrote:
  

 imp=# select age(datfrozenxid) from pg_database where datname = 'imp';
     age
 ------------
  1571381411
 (1 row)

Time to start VACUUM FULL ANALYZE over the weekend.



I guess this comes too late, but you don't need VACUUM FULL for that. 
Yes, I know that VACUUM FULL isn't required here, but because the 
weekend is our slow time on the server I thought that I would perform a 
full vacuum.

VACUUM FULL _does not_ mean vacuum everything!

  
What do you mean by this?  I wanted to do both a VACUUM ANALYZE and a 
VACUUM FULL, so ran VACUUM FULL ANALYZE. Is there something odd about 
VACUUM FULL, other than locking the table it's working on?

A

  



---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


Re: [GENERAL] Numeric performances

2007-06-04 Thread Tom Lane
Vincenzo Romano [EMAIL PROTECTED] writes:
 It sounds quite strange to me that numeric is faster than bigint.

This test is 100% faulty, because it fails to consider the fact that the
accumulator used by sum() isn't necessarily the same type as the input
data.  In fact we sum ints in a bigint and bigints in a numeric to avoid
overflow.

If you try it with max() you'd likely get less-surprising answers.
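
A sketch of the comparison Tom suggests, against the test table PFC
created earlier in the thread (timings can be taken with psql's \timing):

SELECT max(i) FROM test;   -- integer
SELECT max(b) FROM test;   -- bigint
SELECT max(f) FROM test;   -- float8
SELECT max(n) FROM test;   -- numeric

Unlike sum(), max() keeps the input type as its accumulator, so the
comparison isn't skewed by the int -> bigint -> numeric promotion.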

regards, tom lane

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [GENERAL] High-availability

2007-06-04 Thread Madison Kelly

Chander Ganesan wrote:

Madison Kelly wrote:

Hi all,

  After realizing that 'clustering' in the PgSQL docs means multiple 
DBs behind one server, and NOT multple machines, I am back at square 
one, feeling somewhat the fool. :P


  Can anyone point me to docs/websites that discuss options on 
replicating in (as close as possible to) realtime? Ideally with load 
balancing while both/all servers are up, and failover/resyncing when a 
member fails and is restored.
If you're interested in the less than ideal case (no load balancing, 
but synchronous replication in a warm standby type mode), there are 
several options, such as shared disk (two systems sharing a SAN or NAS 
with heartbeat-style fail over - shared disk scenario), or DRBD (where 
block level changes to one device are mirrored in real-time over to 
another, with heartbeat style fail over - this is a shared nothing 
type scenario).  It's not too hard to put together a warm standby 
synchronous replication mechanism with overhead that isn't too much more 
than what you incur by enabling PITR...  Such systems can also have very 
fast failover on failure detection (via heartbeat2), and be synchronous.


I think you'll typically find that you can get one or the other - 
synchronous replication, or load balancing...but not both.  On the other 
hand, if you were really serious about having close to both, you could 
have a three node setup - two (a provider and subscriber) that run using 
Slony-I (and async replication) and one that runs using one of the 
aforementioned methods (i.e., DRBD and warm-standby synchronous 
replication).  In such cases a failover would mean switching to the 
synchronous replication system.  You should even be able to get SLONY to 
continuing to avail you with load balancing in such a case, without 
having to re-sync - though I haven't tried this myself...  You'd still 
have a potential query that got stale data (when it went to a Slony-I 
subscriber), but you would never lose a committed transaction.  You'd 
have the added benefit of a shared nothing environment as well...


As a side plug, we discuss and implement a few of these options in our 
PostgreSQL performance tuning course.. 
http://www.otg-nc.com/training-courses/coursedetail.php?courseid=47cat_id=8 



  Is this even possible on PostgreSQL?

  Being a quite small company, proprietary hardware and fancy software 
licenses are not possible (ie: 'use oracle' won't help).


  I've looked at slony, but it looks more like a way to push 
occasional copies to slaves, and isn't meant to be real time. Am I 
wrong by chance?


  Thanks for any help/tips/pointers!


Chander Ganesan
Open Technology Group, Inc.
One Copley Parkway, Suite 210
Morrisville, NC  27560
Phone: 877-258-8987/919-463-0999
http://www.otg-nc.com
*Expert PostgreSQL Training - On-Site and Public Enrollment*


Madi

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings




Thank you for your reply!

The more I learn, the more I am leaning towards the DRBD/shared-nothing 
setup. Our loads are not terribly heavy at this point. I hate the idea 
of having a nice server sitting there doing nothing 99% of the time, but 
it looks like the most viable way of setting up HA at this point. Given 
that I am learning as I go, I think the three-way setup you describe 
would be a bit too ambitious for me just now. That said, I do have a 
spare third server that I could use for just such a setup, should I feel 
comfortable enough down the road.


Madi

---(end of broadcast)---
TIP 4: Have you searched our list archives?

  http://archives.postgresql.org/


Re: NULLS and User Input WAS Re: [GENERAL] multimaster

2007-06-04 Thread Richard Huxton

Andrew Sullivan wrote:

On Mon, Jun 04, 2007 at 12:37:42AM +0200, PFC wrote:

NULL usually means unknown or not applicable


Aaaargh!  No, it doesn't.  It means NULL.  Nothing else.  

If it meant unknown or not applicable or anything else, then 


SELECT * FROM nulltbl a, othernulltbl b
WHERE a.nullcol = b.nullcol

would return rows where a.nullcol contained NULL and b.nullcol
contained NULL.  But it doesn't, because !(NULL = NULL).  


Well, a strict "unknown" is fine - so long as it means just that.
  How tall is Andrew? Unknown
  How tall is Richard? Unknown
  Are Andrew and Richard the same height? Unknown

The problem is the slippery slope from "unknown" to "not applicable" to
"user refused to answer" to ...whatever.



Part of it is the poor support for out-of-band values. In many cases 
what people want is the ability to have a value of type 'number in range 
1-20 or text n/a' and there's not a simple way to provide that, so 
they use null.


--
  Richard Huxton
  Archonet Ltd

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

  http://www.postgresql.org/docs/faq


Re: NULLS and User Input WAS Re: [GENERAL] multimaster

2007-06-04 Thread Greg Smith

On Mon, 4 Jun 2007, Ian Harding wrote:


The hazard with doing stuff like that is some joker could name their
kid Billy NMN Simpson.  Or this
http://www.snopes.com/autos/law/noplate.asp


That settles it; I'm getting custom plates with NULL on them just to see 
if it makes it impossible for me to be sent a ticket.


--
* Greg Smith [EMAIL PROTECTED] http://www.gregsmith.com Baltimore, MD

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
  choose an index scan if your joining column's datatypes do not
  match


Re: [Fwd: Re: [GENERAL] Autovacuum keeps vacuuming a table disabled in pg_autovacuum]

2007-06-04 Thread Martijn van Oosterhout
On Mon, Jun 04, 2007 at 07:34:13AM -0700, Ron St-Pierre wrote:
 What do you mean by this?  I wanted to do both a VACUUM ANALYZE and a 
 VACUUM FULL, so ran VACUUM FULL ANALYZE. Is there something odd about 
 VACUUM FULL, other than locking the table it's working on?

It tends to bloat indexes. Also, people tend to find that CLUSTER is
faster anyway.
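
For example, a minimal sketch with made-up names (t, t_pkey; syntax as of
8.1/8.2):

CLUSTER t_pkey ON t;   -- rewrites the table in index order, reclaiming dead space
ANALYZE t;             -- CLUSTER does not update the planner statistics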

If all you want is to avoid XID wraparound, an ordinary VACUUM will do
fine.

Have a nice day,
-- 
Martijn van Oosterhout   [EMAIL PROTECTED]   http://svana.org/kleptog/
 From each according to his ability. To each according to his ability to 
 litigate.




Re: [GENERAL] High-availability

2007-06-04 Thread Bohdan Linda
On Mon, Jun 04, 2007 at 04:21:32PM +0200, Chander Ganesan wrote:
 I think you'll typically find that you can get one or the other - 
 synchronous replication, or load balancing...but not both.  On the other 

Hi,

I am in very similar position, but I am more failover oriented. I am
considering using pgcluster, which shall resolve both at the cost of
slight transaction overhead. Does anyone have any experience with this?
What problems may I expect in this setup?

Kind regards,
Bohdan 

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: NULLS and User Input WAS Re: [GENERAL] multimaster

2007-06-04 Thread Alexander Staubo

On 6/4/07, Andrew Sullivan [EMAIL PROTECTED] wrote:

On Mon, Jun 04, 2007 at 12:37:42AM +0200, PFC wrote:
 NULL usually means unknown or not applicable

Aaaargh!  No, it doesn't.  It means NULL.  Nothing else.

If it meant unknown or not applicable or anything else, then

SELECT * FROM nulltbl a, othernulltbl b
WHERE a.nullcol = b.nullcol

would return rows where a.nullcol contained NULL and b.nullcol
contained NULL.  But it doesn't, because !(NULL = NULL).


I don't disagree with the principle, but that's a specious argument.
Who says (unknown = unknown) should equal true?

Alexander.

---(end of broadcast)---
TIP 4: Have you searched our list archives?

  http://archives.postgresql.org/


[GENERAL] current_date / datetime stuff

2007-06-04 Thread Joshua

Hello,

I was hoping someone here may be able to help me out with this one:

Is there anything similiar to: SELECT current_date;
that will return the date of the first Monday of the month?

Please let me know.

Thanks,
Joshua
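
One possible approach, sketched with date_trunc() and extract() -- treat it
as a starting point rather than a tested recipe:

-- first day of the current month
SELECT date_trunc('month', current_date)::date;

-- first Monday of the current month (extract(dow ...): 0 = Sunday ... 6 = Saturday)
SELECT date_trunc('month', current_date)::date
       + ((8 - extract(dow FROM date_trunc('month', current_date))::integer) % 7);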

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
  choose an index scan if your joining column's datatypes do not
  match


[GENERAL] Can someone have a look at my pg_hba.conf file ?

2007-06-04 Thread Steven De Vriendt

Hi,

I'm trying to reach my postgres database via a remote connection. Yet
my connection is refused when I try to do that.
I'm using Ubuntu Feisty
Following lines are now in my pg_hba.conf-file:

# TYPE  DATABASEUSERCIDR-ADDRESS  METHOD

# local is for Unix domain socket connections only
local   all all   ident sameuser
# IPv4 local connections:
host all all 127.0.0.1/32md5
# IPv6 local connections:
hostall all ::1/128   md5

# Connections for all PCs on the subnet
#
# TYPE DATABASE USER IP-ADDRESS IP-MASK METHOD
local   all all 127.0.0.1/32 255.255.255.0  trust

When I try the following command:

netstat -nlp | grep 5432

I see my subnet mask isn't included:

[EMAIL PROTECTED]:~# netstat -nlp | grep 5432
tcp0  0 127.0.0.1:5432  0.0.0.0:*
LISTEN 8292/postmaster
unix  2  [ ACC ] STREAM LISTENING 27162
8292/postmaster /var/run/postgresql/.s.PGSQL.5432


Can someone help me out ?

Thanks !

Steven

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: NULLS and User Input WAS Re: [GENERAL] multimaster

2007-06-04 Thread Andrew Sullivan
On Mon, Jun 04, 2007 at 03:38:01PM +0100, Richard Huxton wrote:
 Well, a strict unknown is fine - so long as it means just that.
   How tall is Andrew? Unknown
   How tall is Richard? Unknown
   Are Andrew and Richard the same height? Unknown
 
 The problem is the slippery-slope from unknown to not applicable to 
 user refused to answer to ...whatever

While you do well to point out that I have equivocated on unknown
(in my usual twitchy way whenever NULLs come up, I am told), your
example actually illustrates part of the problem.  There are NULLs
that are actually just local absences of data (you don't know how
tall I am), NULLs that are in fact cases of 'no such data' (the full
name that 'S' stands for in Harry S Truman -- Truman's middle name
was in fact just S), NULLs that are data nobody knows (unlike the
mere locally-unknown data: When the tree fell in the woods with
nobody around to hear it, did it make a sound?), and NULLs that are
the data in response to questions that can't be answered, (What
exists after the end of the universe?)

See, this is what happens when you study the wrong things in school. 
You start to think that logic and metaphysics are somehow related to
one another. :-/

A

-- 
Andrew Sullivan  | [EMAIL PROTECTED]
The fact that technology doesn't work is no bar to success in the marketplace.
--Philip Greenspun

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org/


Re: [GENERAL] what to do when pg_cancel_backend() doesnt work?

2007-06-04 Thread Alvaro Herrera
Rhys Stewart escribió:
 Well,
 that pretty much sums it up.  pg_cancel_backend() is not working. The
 query is still there. The box is across the city and the admin is not
 in, is there a way to remote restart the server from within PG?

It is probably a bug, and if you give us some information we might be
able to fix it.  For example, what is it doing, and what version is it?

-- 
Alvaro Herrera  Developer, http://www.PostgreSQL.org/
No deja de ser humillante para una persona de ingenio saber
que no hay tonto que no le pueda enseñar algo. (Jean B. Say)

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [Fwd: Re: [GENERAL] Autovacuum keeps vacuuming a table disabled in pg_autovacuum]

2007-06-04 Thread Andrew Sullivan
On Mon, Jun 04, 2007 at 07:34:13AM -0700, Ron St-Pierre wrote:
 VACUUM FULL _does not_ mean vacuum everything!

 What do you mean by this? 

Sorry, I was trying to prevent you doing a VACUUM FULL you didn't
want (but another message said you actually intended a vacuum full). 
Several people have been bitten by the misunderstanding that VACUUM
FULL means VACUUM ALL TABLES (e.g. vacuum full database). 

A

-- 
Andrew Sullivan  | [EMAIL PROTECTED]
I remember when computers were frustrating because they *did* exactly what 
you told them to.  That actually seems sort of quaint now.
--J.D. Baldwin

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] what to do when pg_cancel_backend() doesnt work?

2007-06-04 Thread Rhys Stewart

well there is the info below:


GISDEV=# select version();
version
--
PostgreSQL 8.1.5 on i686-pc-mingw32, compiled by GCC gcc.exe (GCC)
3.4.2 (mingw-special)
(1 row)

GISDEV=# select * from pg_catalog.pg_stat_activity;
datid | datname  | procpid | usesysid | usename  |
current_query |query_start |
backend_start|  client_addr   | client_port
---+--+-+--+--+--++++-
19015 | GISDEV   |3584 |17024 | gisadmin | IDLE
 | 2007-06-04 10:04:46.172-04 | 2007-06-04 09:33:18.399-04 |
130.100.30.124 |3491
19015 | GISDEV   |2460 |17024 | gisadmin | IDLE
 | 2007-06-04 09:34:33.07-04  | 2007-06-04 09:34:33.07-04  |
130.100.30.124 |3493
19015 | GISDEV   |4156 |17024 | gisadmin | IDLE
 | 2007-06-04 10:03:40.267-04 | 2007-06-04 09:57:56.005-04 |
130.100.30.124 |3589
19015 | GISDEV   |2960 |17024 | gisadmin | IDLE
 | 2007-06-04 09:34:26.398-04 | 2007-06-01 14:49:20.534-04 |
130.100.30.124 |2874
19015 | GISDEV   |3288 |17024 | gisadmin | drop table tmp.kpsall3buff ;
create table tmp.kpsall3buff with oids as
select pole_id,string,filename,len,buffer(geo,32.5) from tmp.jpsall3

--limit 50 | 2007-06-01 10:20:45.969-04 | 2007-06-01 10:12:51.472-04 |
130.100.30.124 |2130
10793 | postgres | 392 |17024 | gisadmin | IDLE
 | 2007-06-04 09:33:18.837-04 | 2007-06-04 09:33:18.134-04 |
130.100.30.124 |3490
19015 | GISDEV   |1860 |   10 | postgres | IDLE
 | 2007-06-04 11:07:12.874-04 | 2007-06-04 10:31:18.089-04 |
130.100.30.124 |3666
19015 | GISDEV   |5216 |   10 | postgres | IDLE
 | 2007-06-04 11:07:12.843-04 | 2007-06-04 10:34:38.977-04 |
130.100.30.124 |3720
19015 | GISDEV   |5024 |   10 | postgres | IDLE
 | 2007-06-04 11:08:50.685-04 | 2007-06-01 17:36:36.707-04 |
130.100.30.124 |3057
(9 rows)

GISDEV=# select * from pg_catalog.pg_locks ;
  locktype| database | relation | page | tuple | transactionid |
classid | objid | objsubid | transaction | pid  |mode
| granted
---+--+--+--+---+---+-+---+--+-+--+-+-
relation  |19015 |  5220813 |  |   |   |
  |   |  |15820036 | 3288 | AccessExclusiveLock |
t
relation  |19015 |  5223128 |  |   |   |
  |   |  |15820036 | 3288 | ShareLock   |
t
relation  |19015 |  5223126 |  |   |   |
  |   |  |15820036 | 3288 | AccessExclusiveLock |
t
relation  |19015 |  5223130 |  |   |   |
  |   |  |15820036 | 3288 | ShareLock   |
t
relation  |19015 |  5223130 |  |   |   |
  |   |  |15820036 | 3288 | AccessExclusiveLock |
t
transactionid |  |  |  |   |  15822393 |
  |   |  |15822393 | 3564 | ExclusiveLock   |
t
relation  |19015 |  5220811 |  |   |   |
  |   |  |15820036 | 3288 | AccessExclusiveLock |
t
object|0 |  |  |   |   |
 1260 | 17024 |0 |15820036 | 3288 | AccessShareLock |
t
relation  |19015 |  5220811 |  |   |   |
  |   |  |15822393 | 3564 | AccessShareLock |
f
transactionid |  |  |  |   |  15820036 |
  |   |  |15820036 | 3288 | ExclusiveLock   |
t
transactionid |  |  |  |   |  15844904 |
  |   |  |15844904 | 5024 | ExclusiveLock   |
t
relation  |19015 |10342 |  |   |   |
  |   |  |15844904 | 5024 | AccessShareLock |
t
relation  |19015 |  5220815 |  |   |   |
  |   |  |15820036 | 3288 | AccessExclusiveLock |
t
relation  |19015 |  3781129 |  |   |   |
  |   |  |15820036 | 3288 | AccessShareLock |
t
(14 rows)

GISDEV=#

On 6/4/07, Alvaro Herrera [EMAIL PROTECTED] wrote:

Rhys Stewart escribió:
 Well,
 that pretty much sums it up.  pg_cancel_backend() is not working. The
 query is still there. The box is across the city and the admin is not
 in, is there a way to remote restart the server from within PG?

It is probably a bug, and if you give us some information we might be
able to fix it.  For example, what is it doing, and what version is it?

--
Alvaro Herrera   

Re: [GENERAL] what to do when pg_cancel_backend() doesnt work?

2007-06-04 Thread Rhys Stewart

a more readable version
GISDEV=# select version();
 version
  
--
 PostgreSQL 8.1.5 on i686-pc-mingw32, compiled by GCC gcc.exe (GCC) 3.4.2 
(mingw-special)
(1 row)

GISDEV=# select * from pg_catalog.pg_stat_activity;
 datid | datname  | procpid | usesysid | usename  |current_query
 |query_start |   backend_start|  client_addr   | 
client_port 
---+--+-+--+--+--++++-
 19015 | GISDEV   |3584 |17024 | gisadmin | IDLE  
 | 2007-06-04 10:04:46.172-04 | 2007-06-04 09:33:18.399-04 | 130.100.30.124 |   
 3491
 19015 | GISDEV   |2460 |17024 | gisadmin | IDLE  
 | 2007-06-04 09:34:33.07-04  | 2007-06-04 09:34:33.07-04  | 130.100.30.124 |   
 3493
 19015 | GISDEV   |4156 |17024 | gisadmin | IDLE  
 | 2007-06-04 10:03:40.267-04 | 2007-06-04 09:57:56.005-04 | 130.100.30.124 |   
 3589
 19015 | GISDEV   |2960 |17024 | gisadmin | IDLE  
 | 2007-06-04 09:34:26.398-04 | 2007-06-01 14:49:20.534-04 | 130.100.30.124 |   
 2874
 19015 | GISDEV   |3288 |17024 | gisadmin | drop table tmp.kpsall3buff ;
create table tmp.kpsall3buff with oids as
select pole_id,string,filename,len,buffer(geo,32.5) from tmp.jpsall3 

--limit 50 | 2007-06-01 10:20:45.969-04 | 2007-06-01 10:12:51.472-04 | 
130.100.30.124 |2130
 10793 | postgres | 392 |17024 | gisadmin | IDLE  
 | 2007-06-04 09:33:18.837-04 | 2007-06-04 09:33:18.134-04 | 130.100.30.124 |   
 3490
 19015 | GISDEV   |1860 |   10 | postgres | IDLE  
 | 2007-06-04 11:07:12.874-04 | 2007-06-04 10:31:18.089-04 | 130.100.30.124 |   
 3666
 19015 | GISDEV   |5216 |   10 | postgres | IDLE  
 | 2007-06-04 11:07:12.843-04 | 2007-06-04 10:34:38.977-04 | 130.100.30.124 |   
 3720
 19015 | GISDEV   |5024 |   10 | postgres | IDLE  
 | 2007-06-04 11:08:50.685-04 | 2007-06-01 17:36:36.707-04 | 130.100.30.124 |   
 3057
(9 rows)

GISDEV=# select * from pg_catalog.pg_locks ;
   locktype| database | relation | page | tuple | transactionid | classid | 
objid | objsubid | transaction | pid  |mode | granted 
---+--+--+--+---+---+-+---+--+-+--+-+-
 relation  |19015 |  5220813 |  |   |   | | 
  |  |15820036 | 3288 | AccessExclusiveLock | t
 relation  |19015 |  5223128 |  |   |   | | 
  |  |15820036 | 3288 | ShareLock   | t
 relation  |19015 |  5223126 |  |   |   | | 
  |  |15820036 | 3288 | AccessExclusiveLock | t
 relation  |19015 |  5223130 |  |   |   | | 
  |  |15820036 | 3288 | ShareLock   | t
 relation  |19015 |  5223130 |  |   |   | | 
  |  |15820036 | 3288 | AccessExclusiveLock | t
 transactionid |  |  |  |   |  15822393 | | 
  |  |15822393 | 3564 | ExclusiveLock   | t
 relation  |19015 |  5220811 |  |   |   | | 
  |  |15820036 | 3288 | AccessExclusiveLock | t
 object|0 |  |  |   |   |1260 | 
17024 |0 |15820036 | 3288 | AccessShareLock | t
 relation  |19015 |  5220811 |  |   |   | | 
  |  |15822393 | 3564 | AccessShareLock | f
 transactionid |  |  |  |   |  15820036 | | 
  |  |15820036 | 3288 | ExclusiveLock   | t
 transactionid |  |  |  |   |  15844904 | | 
  |  |15844904 | 5024 | ExclusiveLock   | t
 relation  |19015 |10342 |  |   |   | | 
  |  |15844904 | 5024 | AccessShareLock | t
 relation  |19015 |  5220815 |  |   |   | | 
  |  |15820036 | 3288 | AccessExclusiveLock | t
 relation  |19015 |  3781129 |  |   |   | | 
  |  |15820036 | 3288 | AccessShareLock | t
(14 rows)

GISDEV=# 
---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] warm standby server stops doingcheckpointsafterawhile

2007-06-04 Thread Frank Wittig
After some observation of massive reindexing of some hundred thousand
data sets it seems to me that the slave doesn't skip checkpoints
anymore. (Apart from those skipped because of the CheckpointTimeout thing)
I'll keep an eye on it and report back any news on the issue.

Thank you for the good work!

Regards,
  Frank Wittig

Teodor Sigaev schrieb:
 Ooops. Patch doesn't apply cleanly. New version.
 
 
 Attached patch fixes that deadlock bug too. And, previous version of
 my patch has a mistake which is observable on CREATE INDEX .. USING
 GIN query.
 





Re: [GENERAL] Can someone have a look at my pg_hba.conf file ?

2007-06-04 Thread Tom Lane
Steven De Vriendt [EMAIL PROTECTED] writes:
 I'm trying to reach my postgres database via a remote connection. Yet
 my connection is refused when I try to do that.

I think you need to fix listen_addresses, not pg_hba.conf.
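
A quick way to confirm from a psql session (the value to listen on is up to
you; '*' is just the catch-all):

SHOW listen_addresses;
-- if this says 'localhost', only 127.0.0.1 is bound; set
-- listen_addresses = '*' (or the server's LAN address) in postgresql.conf,
-- restart, and keep a matching "host ... md5" line in pg_hba.conf for the subnet.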

regards, tom lane

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org/


Re: [GENERAL] what to do when pg_cancel_backend() doesnt work?

2007-06-04 Thread Alvaro Herrera
Rhys Stewart escribió:
 a more readable version

What is this buffer() function?


-- 
Alvaro Herrerahttp://www.CommandPrompt.com/
PostgreSQL Replication, Consulting, Custom Development, 24x7 support

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] warm standby server stops doingcheckpointsafterawhile

2007-06-04 Thread Teodor Sigaev

After some observation of massive reindexing of some hundred thousand
data sets it seems to me that the slave doesn't skip checkpoints
anymore. (Apart from those skipped because of the CheckpointTimeout thing)
I'll keep an eye on it and report back any news on the issue.


Nice, committed. Thank for your report and testing.
--
Teodor Sigaev   E-mail: [EMAIL PROTECTED]
   WWW: http://www.sigaev.ru/

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
  choose an index scan if your joining column's datatypes do not
  match


Re: NULLS and User Input WAS Re: [GENERAL] multimaster

2007-06-04 Thread PFC



Aaaargh!  No, it doesn't.  It means NULL.  Nothing else.


Well, x = UNKNOWN doesn't make any sense... the answer is UNKNOWN.
	x IS UNKNOWN does make sense, the answer is true or false. Replace  
UNKNOWN with NULL...
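
	For instance (the casts are only there to give the NULLs a type):

SELECT NULL::int = NULL::int;                 -- NULL (unknown), not true
SELECT (NULL::int = NULL::int) IS UNKNOWN;    -- true
SELECT NULL::int IS DISTINCT FROM NULL::int;  -- false: the null-safe comparison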


	Actually it means what the DBA wants it to mean (which opens the door to  
many a misguided design...)


	I hereby light a candle to the pgsql designers who didn't inflict  
0000-00-00 00:00:00 upon us besides NULL dates.


---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] Numeric performances

2007-06-04 Thread PFC



If you try it with max() you'd likely get less-surprising answers.


So it was in fact the type conversions that got timed.
Damn. I got outsmarted XDD

Rewind :

CREATE TEMPORARY TABLE test AS SELECT a::FLOAT AS f,  
(a::NUMERIC)*100 AS n, a::INTEGER AS i, a::BIGINT AS b FROM  
generate_series( 1,10 ) AS a;


Max and Sort will use comparisons :

SELECT max(i) FROM test; SELECT max(b) FROM test; SELECT max(f) FROM test;  
SELECT max(n) FROM test;

Temps : 42,132 ms
Temps : 59,499 ms
Temps : 58,808 ms
Temps : 54,197 ms

SELECT * FROM test ORDER BY i LIMIT 1; SELECT * FROM test ORDER BY b LIMIT  
1; SELECT * FROM test ORDER BY f LIMIT 1; SELECT * FROM test ORDER BY n  
LIMIT 1;

Temps : 58,723 ms
Temps : 60,520 ms
Temps : 53,188 ms
Temps : 61,779 ms

SELECT count(*) FROM test a JOIN test b ON (a.i=b.i);
Temps : 275,411 ms

SELECT count(*) FROM test a JOIN test b ON (a.b=b.b);
Temps : 286,132 ms

SELECT count(*) FROM test a JOIN test b ON (a.f=b.f);
Temps : 295,956 ms

SELECT count(*) FROM test a JOIN test b ON (a.n=b.n);
Temps : 321,292 ms

SELECT count(*) FROM test a JOIN test b ON (a.i=b.b);
Temps : 281,162 ms

SELECT count(*) FROM test a JOIN test b ON  
(a.n=b.i::NUMERIC*100);

Temps : 454,706 ms

Now, addition :

SELECT count(i+1) FROM test;
Temps : 46,973 ms

SELECT count(b+1) FROM test;
Temps : 60,027 ms

SELECT count(f+1) FROM test;
Temps : 56,829 ms

SELECT count(n+1) FROM test;
Temps : 103,316 ms

Multiplication :

SELECT count(i*1) FROM test;
Temps : 46,950 ms

SELECT count(b*1) FROM test;
Temps : 58,670 ms

SELECT count(f*1) FROM test;
Temps : 57,058 ms

SELECT count(n*1) FROM test;
Temps : 108,379 ms

SELECT count(i) FROM test;
Temps : 38,351 ms

SELECT count(i/1234) FROM test;
Temps : 48,961 ms

SELECT count(b/1234) FROM test;
Temps : 62,496 ms

SELECT count(n/1234) FROM test;
Temps : 186,674 ms

	Conclusion : numeric seems a bit slower (division being slowest  
obviously) but nothing that can't be swamped by a few disk seeks...





---(end of broadcast)---
TIP 4: Have you searched our list archives?

  http://archives.postgresql.org/


[GENERAL] pg_dump: ERROR: could not open relation with OID ...

2007-06-04 Thread Thomas F. O'Connell
During a routine backup procedure (that does not run nightly) for an  
8.2.3 postgres cluster, pg_dump failed:


pg_dump: Error message from server: ERROR:  could not open relation  
with OID ...


In doing some log forensics, I discovered that this error has been  
showing up in the logs intermittently unconnected to pg_dump for the  
past 6 days. It's not occurring at an alarming rate, but the fact  
that it's occurring at all is mildly alarming, and the fact that it's  
preventing backups is even more alarming.


In reviewing the logs, one OID in particular shows up in the vast  
majority of the errors, and it doesn't correspond to any entries I  
can find in pg_class. A handful of other OIDs show up, and a sampling  
of them reveals, too, no entries in pg_class.


The other curious item is that a number of the errors occur in a  
pattern where they precede other error statements and don't seem to  
be directly tied to connections, except during the failed pg_dumps.


The typical pattern in the logs is:

[timestamp] [pid] [remote host/port]:ERROR:  [error message]
[timestamp] [pid] [remote host/port]:STATEMENT: [statement]

With this error, though, the format doesn't include the remote host/ 
port, which makes me wonder if it's occurring as a result of  
autovacuum or other local/internal activity.


Now my thoughts return nervously to a previous thread:

http://archives.postgresql.org/pgsql-general/2007-05/msg01208.php

I didn't think much of it at the time, but now I wonder if it was  
indicative of trouble on the way?


--
Thomas F. O'Connell

optimizing modern web applications
: for search engines, for usability, and for performance :

http://o.ptimized.com/
615-260-0005



Re: NULLS and User Input WAS Re: [GENERAL] multimaster

2007-06-04 Thread Zoltan Boszormenyi

Alexander Staubo írta:

On 6/4/07, Andrew Sullivan [EMAIL PROTECTED] wrote:

On Mon, Jun 04, 2007 at 12:37:42AM +0200, PFC wrote:
 NULL usually means unknown or not applicable

Aaaargh!  No, it doesn't.  It means NULL.  Nothing else.

If it meant unknown or not applicable or anything else, then

SELECT * FROM nulltbl a, othernulltbl b
WHERE a.nullcol = b.nullcol

would return rows where a.nullcol contained NULL and b.nullcol
contained NULL.  But it doesn't, because !(NULL = NULL).


I don't disagree with the principle, but that's a specious argument.
Who says (unknown = unknown) should equal true?


NULL means the value doesn't exist, and for your amusement,
here's an analogy for why !(NULL = NULL).
Prove the following statement: every fairy has black hair.
To prove it, suppose there exists a fairy whose hair
isn't black. But fairies don't exist. QED.
Now replace the statement with another one,
possibly one that contradicts the statement above.
Along the same lines, any statement can be proven about
non-existing things, even contradicting ones.

Best regards

--
--
Zoltán Böszörményi
Cybertec Geschwinde  Schönig GmbH
http://www.postgresql.at/


---(end of broadcast)---
TIP 6: explain analyze is your friend


[GENERAL] Moving pg_xlog to another drive

2007-06-04 Thread Ben
I'm trying to move my WAL to another drive, but am having difficulties 
with this seemingly simple process. Every time I start up with pg_xlog 
symlinked to my other drive, I get this:


FATAL:  could not open file pg_xlog/0001.history: Permission denied

If I move pg_xlog back into its normal place then things work fine. The 
postgres user can certainly create files when using the symlink, so I 
really don't think it's a permission issue... but I'm at a loss as to what 
else it might be.


This is on CentOS 5 with the latest RPMs. SELinux is not enforcing (nor 
logging any events when I do this stuff).


---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

  http://www.postgresql.org/docs/faq


Re: [GENERAL] pg_dump: ERROR: could not open relation with OID ...

2007-06-04 Thread Alvaro Herrera
Thomas F. O'Connell wrote:
 During a routine backup procedure (that does not run nightly) for an  
 8.2.3 postgres cluster, pg_dump failed:
 
 pg_dump: Error message from server: ERROR:  could not open relation  
 with OID ...
 
 In doing some log forensics, I discovered that this error has been  
 showing up in the logs intermittently unconnected to pg_dump for the  
 past 6 days. It's not occurring at an alarming rate, but the fact  
 that it's occurring at all is mildly alarming, and the fact that it's  
 preventing backups is even more alarming.
 
 In reviewing the logs, one OID in particular shows up in the vast  
 majority of the errors, and it doesn't correspond to any entries I  
 can find in pg_class. A handful of other OIDs show up, and a sampling  
 of them reveals, too, no entries in pg_class.

OIDs that show up more than a couple of times are likely to be stored in
a catalog somewhere.  The first place I'd look is pg_depend and
pg_shdepend.  Other places that mention OIDs related to relations are
pg_constraint, pg_rewrite, pg_description, pg_shdescription, pg_trigger,
pg_type, pg_autovacuum; but all of them would most likely be used only
if a pg_class tuple references those, so it's unlikely that it's those
at fault.
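
A sketch of such a search (replace 12345 with the OID from the error
message):

SELECT classid::regclass, objid, refclassid::regclass, refobjid, deptype
  FROM pg_depend
 WHERE objid = 12345 OR refobjid = 12345;

SELECT * FROM pg_shdepend
 WHERE objid = 12345 OR refobjid = 12345;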

-- 
Alvaro Herrera   http://www.PlanetPostgreSQL.org/
No necesitamos banderas
 No reconocemos fronteras  (Jorge González)

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org/


Re: [GENERAL] Moving pg_xlog to another drive

2007-06-04 Thread Joshua D. Drake

Ben wrote:
I'm trying to move my WAL to another drive, but am having difficulties 
with this seemingly simple process. Every time I start up with pg_xlog 
symlinked to my other drive, I get this:


FATAL:  could not open file pg_xlog/0001.history: Permission denied

If I move pg_xlog back into its normal place then things work fine. The 
postgres user can certainly create files when using the symlink, so I 
really don't think it's a permission issue... but I'm at a loss as to 
what else it might be.


Is the parent directory of the place you are moving pg_xlog to a place 
that the postgres user can enter?




This is on CentOS 5 with the latest RPMs. SELinux is not enforcing (nor 
logging any events when I do this stuff).


---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

  http://www.postgresql.org/docs/faq




--

  === The PostgreSQL Company: Command Prompt, Inc. ===
Sales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240
Providing the most comprehensive  PostgreSQL solutions since 1997
 http://www.commandprompt.com/

Donate to the PostgreSQL Project: http://www.postgresql.org/about/donate
PostgreSQL Replication: http://www.commandprompt.com/products/


---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


Re: [GENERAL] Moving pg_xlog to another drive

2007-06-04 Thread Alvaro Herrera
Ben wrote:
 I'm trying to move my WAL to another drive, but am having difficulties 
 with this seemingly simple process. Every time I start up with pg_xlog 
 symlinked to my other drive, I get this:
 
 FATAL:  could not open file pg_xlog/0001.history: Permission denied
 
 If I move pg_xlog back into its normal place then things work fine. The 
 postgres user can certainly create files when using the symlink, so I 
 really don't think it's a permission issue... but I'm at a loss as to what 
 else it might be.

Maybe the postgres user does not have all permissions in dirs leading to
the new directory?  You must give it at least x permissions on all
levels up to the parent.

-- 
Alvaro Herrera http://www.flickr.com/photos/alvherre/
Criptografía: Poderosa técnica algorítmica de codificación que es
empleada en la creación de manuales de computadores.

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


[GENERAL] COPY error

2007-06-04 Thread ABHANG RANE

Hi,
Im trying to load data from a file using copy command. At the end of 
the data, I have appended copy statement as


copy tablename(col1, col2) from stdin with delimiter as '\t';
\.

But I get following error.
ERROR:  syntax error at or near 2713
LINE 1: 2713 {3.70952,1.45728,0.134339,3.99197,2.22381,-0.435095,6.9...
   ^
invalid command \.


Thanks
Abhang
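
For the record, COPY ... FROM stdin expects its data rows after the command,
separated by tabs and terminated by a line holding only \. -- a sketch with
made-up values:

copy tablename(col1, col2) from stdin with delimiter as '\t';
2713	{3.70952,1.45728,0.134339}
2714	{3.99197,2.22381,-0.435095}
\.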



---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

  http://www.postgresql.org/docs/faq


Re: [GENERAL] Moving pg_xlog to another drive

2007-06-04 Thread Ben

On Mon, 4 Jun 2007, Joshua D. Drake wrote:


Ben wrote:
I'm trying to move my WAL to another drive, but am having difficulties with 
this seemingly simple process. Every time I start up with pg_xlog symlinked 
to my other drive, I get this:


FATAL:  could not open file pg_xlog/0001.history: Permission denied

If I move pg_xlog back into its normal place then things work fine. The 
postgres user can certainly create files when using the symlink, so I 
really don't think it's a permission issue... but I'm at a loss as to what 
else it might be.


Is the parent directory of the place you are moving pg_xlog to a place that 
the postgres user can enter?


Sorry, yes, I neglected to mention that the postgres user can enter every 
directory along the path to the new pg_xlog directory. In addition, 
pg_xlog remains owned by postgres.postgres, as does its parent directory, 
and the new pg_xlog directory has permissions of 0700.


So I really can't see why it would be a permission issue.

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] Moving pg_xlog to another drive

2007-06-04 Thread Steve Atkins


On Jun 4, 2007, at 11:15 AM, Ben wrote:


On Mon, 4 Jun 2007, Joshua D. Drake wrote:


Ben wrote:
I'm trying to move my WAL to another drive, but am having  
difficulties with this seemingly simple process. Every time I  
start up with pg_xlog symlinked to my other drive, I get this:
FATAL:  could not open file pg_xlog/0001.history:  
Permission denied
If I move pg_xlog back into its normal place then things work  
fine. The postgres user can certainly create files when using the  
symlink, so I really don't think it's a permission issue... but  
I'm at a loss as to what else it might be.


Is the parent directory of the place you are moving pg_xlog to a  
place that the postgres user can enter?


Sorry, yes, I neglected to mention that the postgres user can enter  
every directory along the path to the new pg_xlog directory. In  
addition, pg_xlog remains owned by postgres.postgres, as does its  
parent directory, and the new pg_xlog directory has permissions of  
0700.


So I really can't see why it would be a permission issue.


Are you running SELinux? Its main goal in life is to break disk  
access by denying permission to files anywhere other than where it  
thinks an application should be allowed to access.


Cheers,
  Steve


---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


[GENERAL] Encrypted column

2007-06-04 Thread Ranieri Mazili

Hello,

I need to store users and passwords in a table and I want to store them 
encrypted, but I haven't found documentation about it. How can I create a 
table with columns user and password, with the password column 
encrypted, and how can I check whether user and password are correct using 
an SQL query?


I appreciate any help

Thanks

Ranieri Mazili

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] Moving pg_xlog to another drive

2007-06-04 Thread Ben

On Mon, 4 Jun 2007, Steve Atkins wrote:

Are you running SELinux? Its main goal in life is to break disk access by 
denying permission to files anywhere other than where it thinks an 
application should be allowed to access.


Bleh. I am, but I *thought* it was not enforcing. Seems I was wrong. 
Thanks for pointing out the one area I hadn't double-checked. :)


---(end of broadcast)---
TIP 4: Have you searched our list archives?

  http://archives.postgresql.org/


Re: [GENERAL] [SQL] Encrypted column

2007-06-04 Thread Gary Chambers

I need to store users and passwords on a table and I want to store it
encrypted, but I don't found documentation about it, how can I create a


Take a look at the pgcrypto user-contributed module.

-- Gary Chambers

// Nothing fancy and nothing Microsoft!

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] debugging C functions

2007-06-04 Thread Islam Hegazy
Yes, I am sure. If I placed a breakpoint in any file, e.g. execMain.c, the 
debugger would enter this file.


Islam Hegazy


- Original Message - 
From: Gregory Stark [EMAIL PROTECTED]

To: Islam Hegazy [EMAIL PROTECTED]
Cc: Tom Lane [EMAIL PROTECTED]; Joe Conway [EMAIL PROTECTED]; 
pgsql-general@postgresql.org

Sent: Monday, June 04, 2007 5:50 AM
Subject: Re: [GENERAL] debugging C functions



Islam Hegazy [EMAIL PROTECTED] writes:


I do the same but I use the ddd debugger
1) Load the shared library from the SQL
2) Open the .c file of my function
3) Place the break points
4) Execute the sql statement 'Select * from Myfn(...);'

The result is displayed and the debugger doesn't stop at the breakpoints.


Are you sure you're attaching to the right process?

One way to do it is to run select pg_backend_pid() from psql and attach to
that pid.

--
 Gregory Stark
 EnterpriseDB  http://www.enterprisedb.com




---(end of broadcast)---
TIP 6: explain analyze is your friend


[GENERAL] Inserting a path into Database

2007-06-04 Thread Ranieri Mazili

Hello,

I need to insert a path into a table, but because of the \ characters I get 
an error from postgres. How can I insert a path like the one below into a table:


insert into production values ('C:\Program Files\My program');

I appreciate any help
Thanks

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


Re: [GENERAL] [SQL] Inserting a path into Database

2007-06-04 Thread Shoaib Mir

If you are on 8.1 you can double the backslashes ( 'C:\\Program Files\\My
program' ), or in 8.2 you can use the new backslash_quote (string)
setting.

You can find help on backslash_quote (string) at --
http://www.postgresql.org/docs/current/static/runtime-config-compatible.html
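
For example, with the production table from the original post, either form
should work on 8.1:

INSERT INTO production VALUES ('C:\\Program Files\\My program');
-- or with the explicit escape-string syntax:
INSERT INTO production VALUES (E'C:\\Program Files\\My program');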

--
Shoaib Mir
EnterpriseDB (www.enterprisedb.com)

On 6/4/07, Ranieri Mazili [EMAIL PROTECTED] wrote:


Hello,

I need to insert a path into a table, but because \ I have a error by
postgres, so how can I insert a path like bellow into a table:

insert into production values ('C:\Program Files\My program');

I appreciate any help
Thanks

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster



Re: [GENERAL] Inserting a path into Database

2007-06-04 Thread Mario Guenterberg
On Mon, Jun 04, 2007 at 05:10:48PM -0300, Ranieri Mazili wrote:
  Hello,
 
  I need to insert a path into a table, but because \ I have a error by 
  postgres, so how can I insert a path like bellow into a table:
 
  insert into production values ('C:\Program Files\My program');
 

Use doubled backslashes. For example c:\\Program Files\\...

Greetings

-- 
 -
| havelsoft.com - Ihr Service Partner für Open Source |
| Tel:  033876-21 966 |
| Notruf: 0173-277 33 60  |
| http://www.havelsoft.com|
| |
| Inhaber: Mario Günterberg   |
| Mützlitzer Strasse 19   |
| 14715 Märkisch Luch |
 -




Re: Partitioning (was Re: [GENERAL] Slightly OT.)

2007-06-04 Thread Jeff Davis
On Fri, 2007-06-01 at 22:13 -0500, Ron Johnson wrote:
 On 06/01/07 19:29, Jeff Davis wrote:
 [snip]
  You shouldn't use a volatile function in a check constraint. Use a
  trigger instead, but even that is unlikely to work for enforcing
  constraints correctly.
  
  In general, for partitioning, you have to make some sacrifices. It's
  very challenging (and/or expensive) to ensure uniqueness across
  partitions.
 
 Are partitioned databases the same as federated databases?
 

I think that usually people refer to a table that is split to be
partitioned (whether across servers or within a single server). I think
federated databases are where various parts of the database are split
across servers, but tables may be intact. 

That's my own understanding of the terminology.

Regards,
Jeff Davis


---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [GENERAL] PITR Base Backup on an idle 8.1 server

2007-06-04 Thread Simon Riggs
On Mon, 2007-06-04 at 12:55 +0200, Marco Colombo wrote:
 Greg Smith wrote:
  The way you're grabbing 
  files directly from the xlog directory only works because your commit 
  workload is so trivial that you can get away with it, and because you 
  haven't then tried to apply future archive logs.
 
 Well, it's only because I don't need future logs, just like I don't need 
 future files. Backup is at 2:00 AM, any change after that is 
 potentially lost. That includes e-mails, web contents, and database 
 contents. The database contents are in no way different to us.
 
 It's the your commit workload is so trivial that you can get away with 
 it I don't really get, but more on this later.
 
  In the general case, 
  circumventing the archiving when the backup is going on won't guarantee 
  everything is ordered just right for PITR to work correctly.
 
 Generic PITR? You mean if backup is at 2:00 AM and the server crashes 
 (all disks lost) at 2:00 PM, you want to be able to recover to some 
 time like 11:00 AM, and be precise about it? That's PITR to me - and the 
 precise part is key here... either the time or the transaction ID 
 would do, the point is being able to draw a line and say anything 
 before this is correct.

 my method

...is dangerous and anyone reading this thread would be well advised to
read the manual in full rather than treating this as a new and clever
technique. I'm adding this as a footnote so that the archives are clear
on this point, so we don't get loads of new DBAs picking up this idea
but missing the exact point of danger.

Making the assumption that it's OK to archive WAL files in the pg_xlog
directory exposes you to the risk of having them deleted by the
archiver, which will invalidate your backup. That might not happen all
of the time, but I'm willing to bet that the time you need it is the
time it didn't work for you. Even if this doesn't affect you, it might
affect others, so I want to be certain to stamp this out before the fire
spreads.

You can still do the lock file test using a safe method. I'll document
that idea so we can steer people in the right direction.

-- 
  Simon Riggs 
  EnterpriseDB   http://www.enterprisedb.com



---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] Corruption of files in PostgreSQL

2007-06-04 Thread Scott Marlowe

Paolo Bizzarri wrote:

On 6/2/07, Tom Lane [EMAIL PROTECTED] wrote:

Paolo Bizzarri [EMAIL PROTECTED] writes:
 On 6/2/07, Tom Lane [EMAIL PROTECTED] wrote:
 Please provide a reproducible test case ...

 as explained above, the problem seems quite random. So I need to
 understand what we have to check.

In this context reproducible means that the failure happens
eventually.  I don't care if the test program only fails once in
thousands of tries --- I just want a complete self-contained example
that produces a failure.


As said above, our application is rather complex and involves several
different pieces of software, including Zope, OpenOffice both as
server and client, and PostgreSQL. We are absolutely NOT sure that the
problem is inside PostgreSQL.

What we are trying to understand is, first and foremost, if there are
known cases under which PostgreSQL can truncate a file.


I would suspect either your hardware (RAID controller, hard drive, cache 
etc) or your OS (kernel bug, file system bug, etc)


For instance:

http://lwn.net/Articles/215868/

documents a bug in the 2.6 linux kernel that can result in corrupted 
files if there are a lot of processes accessing it at once.



---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] Transactional DDL

2007-06-04 Thread David Fetter
On Sat, Jun 02, 2007 at 04:51:13PM -0400, Harpreet Dhaliwal wrote:
 my bad.. i replied to that in a wrong thread. sorry

That is one of many reasons that smart people don't top post.  Had you
decided not to top post, you would have realized instantly that you
were in the wrong thread.

If there is a word or phrase in the above that you do not understand,
please feel free to ask, but blithely continuing to top post will get
you a reputation you don't want.

Regards,
David.
 
 On 6/2/07, Leif B. Kristensen [EMAIL PROTECTED] wrote:
 
 On Saturday 2. June 2007 20:39, Ron Johnson wrote:
 You were politely asked not to top-post.
 
 On 06/02/07 11:46, Harpreet Dhaliwal wrote:
  So, while writing any technical document, would it be wrong to
  mention stored procedures in postgresql?
  what is the general convention?
 
 Did I miss something?  What does stored procedures have to do with
   Transactional DDL?
 
 I believe that he posted this in reply to the Stored procedures and
 functions thread. It kind of fits in there.
 --
 Leif Biberg Kristensen | Registered Linux User #338009
 http://solumslekt.org/ | Cruising with Gentoo/KDE
 My Jazz Jukebox: http://www.last.fm/user/leifbk/
 
 ---(end of broadcast)---
 TIP 9: In versions below 8.0, the planner will ignore your desire to
choose an index scan if your joining column's datatypes do not
match
 

-- 
David Fetter [EMAIL PROTECTED] http://fetter.org/
phone: +1 415 235 3778AIM: dfetter666
  Skype: davidfetter

Remember to vote!
Consider donating to PostgreSQL: http://www.postgresql.org/about/donate

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] Inserting a path into Database

2007-06-04 Thread Michael Glaesemann


On Jun 4, 2007, at 15:10 , Ranieri Mazili wrote:

I need to insert a path into a table, but because \ I have a  
error by postgres, so how can I insert a path like bellow into a  
table:


insert into production values ('C:\Program Files\My program');


In v8.0 and later you can use dollar-quoted strings, e.g.,

select $_$C:\Program Files\My program$_$;
  ?column?
-
C:\Program Files\My program

http://www.postgresql.org/docs/8.2/interactive/sql-syntax- 
lexical.html#SQL-SYNTAX-DOLLAR-QUOTING


For 8.2, you can turn on standard_conforming_strings in  
postgresql.conf so \ will be treated literally:


http://www.postgresql.org/docs/8.2/interactive/runtime-config- 
compatible.html#GUC-STANDARD-CONFORMING-STRINGS


Michael Glaesemann
grzm seespotcode net



---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] debugging C functions

2007-06-04 Thread Islam Hegazy
It worked today for me and I discovered what the problem is. The problem is 
that I have 2 structures that contain pointers. I instantiate variables of 
these structures as static. I can't declare the pointers inside the struct 
as static. Now the problem is that when the function is called a second 
time (it returns a set of records), it finds the static pointers in the 
function, but the data the pointers inside the struct point to is gone. Here is 
a portion of the code:


typedef struct

{

int value;

struct TuplesList *next;

}TuplesList;

typedef struct

{

TuplesList *tuplesHead;

TuplesList *tuplesTail;

struct AggrQuery *nextQuery;

}AggrQuery;



Datum AggrFn(PG_FUNCTION_ARGS)

{

   static AggrQuery *queriesHead;

   static AggrQuery *queriesTail;

}

The first call to AggrFn is correct; in the second call queriesHead and 
queriesTail are preserved, but queriesHead->tuplesHead, for example, is 
rubbish from memory. I read in the PostgreSQL documentation 'However, if you 
want to allocate any data structures to live across calls, you need to put 
them somewhere else. The memory context referenced by multi_call_memory_ctx 
is a suitable location for any data that needs to survive until the SRF is 
finished running'. I don't understand this statement, so I created the 
variables as static, but that is not working with the structs.


Regards

Islam Hegazy



- Original Message - 
From: Gregory Stark [EMAIL PROTECTED]

To: Islam Hegazy [EMAIL PROTECTED]
Cc: Tom Lane [EMAIL PROTECTED]; Joe Conway [EMAIL PROTECTED]; 
pgsql-general@postgresql.org

Sent: Monday, June 04, 2007 5:50 AM
Subject: Re: [GENERAL] debugging C functions



Islam Hegazy [EMAIL PROTECTED] writes:


I do the same but I use the ddd debugger
1) Load the shared library from the SQL
2) Open the .c file of my function
3) Place the break points
4) Execute the sql statement 'Select * from Myfn(...);'

The result is displayed and the debugger doesn't stop at the breakpoints.


Are you sure you're attaching to the right process?

One way to do it is to run select pg_backend_pid() from psql and attach to
that pid.

--
 Gregory Stark
 EnterpriseDB  http://www.enterprisedb.com




---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] SELECT all fields except bad_field from mytbl;

2007-06-04 Thread Erwin Brandstetter
On May 30, 7:42 am, [EMAIL PROTECTED] (PFC) wrote:
 Python example :

I found a decent solution for the existing plpgsql function (as
posted). Thanks a lot for the insight into the Python way, though!


Regards
Erwin


---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] SELECT all fields except bad_field from mytbl;

2007-06-04 Thread Erwin Brandstetter
Or even, slightly shorter:

EXECUTE
 'SELECT '
 || array_to_string(ARRAY(
   SELECT a.attname
 FROM pg_class c, pg_namespace n, pg_attribute a
WHERE n.oid = c.relnamespace
  AND a.attrelid = c.oid
  AND a.attnum >= 1
  AND n.nspname = 'myschema'
  AND c.relname = 'mytbl'
  AND a.attname <> 'bad_field'
ORDER by a.attnum), ', ')
  || ' FROM myschema.mytbl';

/E


---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [GENERAL] Encrypted column

2007-06-04 Thread Guy Rouillier

Ranieri Mazili wrote:

Hello,

I need to store users and passwords on a table and I want to store it 
encrypted, but I don't found documentation about it, how can I create a 
table with columns user and password with column password 
encrypted and how can I check if user and password are correct using 
a sql query ?


Many people consider two-way encryption to be insecure; two-way 
encryption means you can decrypt a value if you know the key, and it is 
insecure because you usually have to put the key into the source code. 
That means at least one person in your company, the programmer 
maintaining the source code, can learn all of your users' passwords. 
One way around that is to hash the value instead.  Then to validate, at 
runtime you hash the user-entered password using the same hash function, 
and validate that it matches the stored hash.  No one in your company 
ever knows end-user passwords.
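
A minimal sketch of the hashed approach with the pgcrypto module mentioned
above (table and column names are made up; pgcrypto must be installed in the
database):

CREATE TABLE app_user (
    username text PRIMARY KEY,
    pw_hash  text NOT NULL
);

-- store a salted hash, never the cleartext password
INSERT INTO app_user (username, pw_hash)
VALUES ('someuser', crypt('secret', gen_salt('bf')));

-- login check: returns a row only when the password matches
SELECT username
  FROM app_user
 WHERE username = 'someuser'
   AND pw_hash = crypt('secret', pw_hash);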


--
Guy Rouillier

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


Re: [GENERAL] what to do when pg_cancel_backend() doesnt work?

2007-06-04 Thread Michael Fuhr
On Mon, Jun 04, 2007 at 12:00:10PM -0400, Alvaro Herrera wrote:
 Rhys Stewart escribió:
  a more readable version
 
 What is this buffer() function?

Looks like the PostGIS buffer() function, which calls GEOSBuffer()
in the GEOS library, which is where the code might be stuck.

http://postgis.refractions.net/docs/ch06.html#id2527029
http://geos.refractions.net/

-- 
Michael Fuhr

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [GENERAL] changing 'mons' in interval?

2007-06-04 Thread Klint Gore
On Mon, 4 Jun 2007 11:43:08 +0200, Martijn van Oosterhout [EMAIL PROTECTED] 
wrote:
 On Mon, Jun 04, 2007 at 06:51:37PM +1000, Klint Gore wrote:
  Is there a way to change mons in interval::text to the full word months
  without resorting to replace(aninterval::text,'mon','Month')?  If it
  can handle locales as well that would be good (but I could live without
  it).
 
 Have you considered using to_char to get the output in the exact format
 you want?

Yes.  It turned out to be a fairly complex statement with CASEs of
extracts to handle plurals and negatives.  That's what led me to asking.

besides
http://www.postgresql.org/docs/7.4/interactive/functions-formatting.html
Warning: to_char(interval, text) is deprecated and should not be used
in newly-written code. It will be removed in the next version.

http://www.postgresql.org/docs/8.0/interactive/functions-formatting.html
Warning: to_char(interval, text) is deprecated and should not be used
in newly-written code. It will be removed in the next version.

And 8.1/8.2 seem to have revived that warning but still have
to_char(interval,text)
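
For what it's worth, a bare-bones sketch of the extract() route -- it ignores
the plural and negative handling that makes the real statement messy:

SELECT extract(year FROM i)::int  || ' years '
    || extract(month FROM i)::int || ' months '
    || extract(day FROM i)::int   || ' days'
  FROM (SELECT interval '1 year 3 mons 5 days' AS i) AS s;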

klint.

+---+-+
: Klint Gore: Non rhyming:
: EMail   : [EMAIL PROTECTED]   :  slang - the:
: Snail   : A.B.R.I.:  possibilities  :
: Mail  University of New England   :  are useless   :
:   Armidale NSW 2351 Australia : L.J.J.  :
: Fax : +61 2 6772 5376 : :
+---+-+

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


[GENERAL] PostgreSQL abnormally terminating with signal 5

2007-06-04 Thread Carlos H. Reimer
Hi,

I've a PostgreSQL 8.0.13 running in a Windows XP Pro 2002 SP2 box that is
terminating abnormally with signal code 5 every time I make a simple select
in one of our tables.

2007-06-04 21:24:12 LOG:  database system was shut down at 2007-06-04
21:23:42 E. South America Standard Time
2007-06-04 21:24:12 LOG:  checkpoint record is at 4/7D7FFC68
2007-06-04 21:24:12 LOG:  redo record is at 4/7D7FFC68; undo record is at
0/0; shutdown TRUE
2007-06-04 21:24:12 LOG:  next transaction ID: 1783448; next OID: 3111310
2007-06-04 21:24:12 LOG:  database system is ready
2007-06-04 21:27:39 LOG:  server process (PID 3512) was terminated by signal
5
2007-06-04 21:27:39 LOG:  terminating any other active server processes
2007-06-04 21:27:39 LOG:  all server processes terminated; reinitializing
2007-06-04 21:27:40 LOG:  database system was interrupted at 2007-06-04
21:24:12 E. South America Standard Time
2007-06-04 21:27:40 LOG:  checkpoint record is at 4/7D7FFC68
2007-06-04 21:27:40 LOG:  redo record is at 4/7D7FFC68; undo record is at
0/0; shutdown TRUE
2007-06-04 21:27:40 LOG:  next transaction ID: 1783448; next OID: 3111310
2007-06-04 21:27:40 LOG:  database system was not properly shut down;
automatic recovery in progress
2007-06-04 21:27:40 FATAL:  the database system is starting up
2007-06-04 21:27:40 LOG:  record with zero length at 4/7D7FFCA8
2007-06-04 21:27:40 LOG:  redo is not required
2007-06-04 21:27:40 LOG:  database system is ready


select version();
 PostgreSQL 8.0.13 on i686-pc-mingw32, compiled by GCC gcc.exe (GCC) 3.4.2
(mingw-special)

Is there anything I can do to help solve this problem?

Thanks!

Reimer


Re: [GENERAL] There can be only one! How to avoid the highlander-problem.

2007-06-04 Thread Erwin Brandstetter
Hi Lew!

Thank you for your comments. I have elaborated on them.

On Jun 3, 7:22 pm, Lew [EMAIL PROTECTED] wrote:
(...)
 The trouble with this is that it models kingship as an attribute of every
 man.  (What, no female rulers allowed?)

Yeah, saddening, isn't it? Actually, for simplicity's sake I
restricted my model to a male, monarchistic world.


 The overhead of being not king is
 carried in every mankind record. This may suffice for your particular model,
 but if you were designing for evolution you'd have a problem.  Every new
 attribute of mankind would need a new column in the table - isDuke,
 isNoble, isHogSlopCleaner.


You are right, of course. (I switch to nation instead of people in
my examples like you did, as the term seems clearer.)

However, in your SQL model, you extracted nationality instead of
kingship. If every man has to be a member of exactly one nation (which I
postulate), nationality can reside with the man. (we need
man.nation_id instead of nation.man_id) That leaves only the kingship
to be allocated. I postulate further that a king can only be king of his
own people (rules out multiple kingships, too). So the king needs
only to have 1 attribute: man_id.
To make room for other roles, as you mentioned, I include a role_id.
However, roles must be unique just like the kingship. To enforce
uniqueness of one king (or other role) per nation I include the
seemingly redundant nation_id and impose a UNIQUE (nation_id, role_id)
on it.
To enforce that a man can only become king of his own people, I wrap
both (man_id, nation_id) in a FOREIGN KEY constraint referencing man.
PostgreSQL therefore requires a corresponding (redundant) UNIQUE
(man_id, nation_id) on man.
!NOTE that I do NOT reference table nation, so we have no circular
foreign-key constraints!


0.) Let's number the models:

CREATE TABLE nation
(
   nation_id INTEGER PRIMARY KEY
);

CREATE TABLE man
(
   man_id INTEGER PRIMARY KEY,
   nation_id INTEGER NOT NULL REFERENCES nation (nation_id) ON UPDATE
CASCADE ON DELETE CASCADE,
   UNIQUE (man_id, nation_id)  -- needed by the composite FOREIGN KEY in role below
);

CREATE TABLE role  -- role is a non-reserved word in postgresql or
                   -- SQL2003, but reserved in SQL99
(
   man_id INTEGER PRIMARY KEY REFERENCES man (man_id) ON UPDATE
CASCADE ON DELETE CASCADE,
   nation_id INTEGER,
   role_id INTEGER,
   UNIQUE (nation_id, role_id),
   FOREIGN KEY (man_id, nation_id) REFERENCES man (man_id, nation_id)
ON UPDATE CASCADE ON DELETE CASCADE
);


This makes sense if we have a lot of men per nation and an unknown
number of unique roles per nation. I will simplify this model step by
step now, along with simplified conditions:


1.) First, let's get rid of multiple roles. My model only needs
kingship. So I replace table role with the following table
king (the rest is the same):

CREATE TABLE king
(
   king_id INTEGER PRIMARY KEY REFERENCES man (man_id) ON UPDATE
CASCADE ON DELETE CASCADE,
   nation_id INTEGER UNIQUE,
   FOREIGN KEY (king_id, nation_id) REFERENCES man (man_id, nation_id)
ON UPDATE CASCADE ON DELETE CASCADE
);


2.) Now we can further simplify the structure. Skip the table king
and merge kingship as an attribute into table man. This makes sense
with one (or a small number of ) known role(s).
Adds a field to _every_ man and gets rid of one tuple per king and the
overhead for that extra table.
Whether this is preferable over 1.) depends on the typical number of
men per nation. If there are more than just a few, you should stick to
1.). If there are only a few, however, you gain something.
Note, how we reference nation(nation_id) twice (!), but only one time
is NOT NULL.
We are still avoiding circular references.

CREATE TABLE man
(
   man_id INTEGER PRIMARY KEY,
   nation_id INTEGER NOT NULL REFERENCES nation (nation_id) ON UPDATE
CASCADE ON DELETE CASCADE,
   king_id INTEGER UNIQUE REFERENCES nation (nation_id) ON UPDATE
CASCADE ON DELETE CASCADE,
   CHECK ((nation_id = king_id))  -- needed to make sure a man
   -- can only become king of his own people.
);


3.) As an improvement over 2.) we can merge kingship into nation (as
you suggested).
Note the ON DELETE SET NULL clause, which allows a king to die.
Actually I would pass on kingship to another man (or NULL if none are
left) via a trigger, much like in my initial post:
trg_mankind_delaft().
Note also that king_id isn't NOT NULL, so we need to be prepared for
nations without a king (king_id IS NULL).
To enforce a king we'd set it NOT NULL DEFAULT 0, but then we'd need
a dummy man with man_id = 0 to satisfy referential integrity, and that's
where the circular references begin to bite, because the dummy man
needs a nation first. This could only be solved by entering a dummy
nation and a dummy man before enforcing referential integrity.
We also need triggers BEFORE INSERT AND UPDATE to check that the king
is a member of the nation:
IF NEW.king_id IS NOT NULL AND
   (SELECT nation_id FROM man WHERE man_id = NEW.king_id)
   IS DISTINCT FROM NEW.nation_id THEN
   RAISE EXCEPTION 'Usurper!';
END IF;
Now we have to store only one field per nation and not 

Re: [GENERAL] what to do when pg_cancel_backend() doesnt work?

2007-06-04 Thread Alvaro Herrera
Michael Fuhr escribió:
 On Mon, Jun 04, 2007 at 12:00:10PM -0400, Alvaro Herrera wrote:
  Rhys Stewart escribió:
   a more readable version
  
  What is this buffer() function?
 
 Looks like the PostGIS buffer() function, which calls GEOSBuffer()
 in the GEOS library, which is where the code might be stuck.
 
 http://postgis.refractions.net/docs/ch06.html#id2527029
 http://geos.refractions.net/

Yeah, that's what Rhys told me by private email.  I suggested asking the
PostGIS guys, but if GEOS is intended to be a platform-neutral module, I
guess it's not very likely that they'll be adding a CHECK_FOR_INTERRUPTS
in the loops there.

(Peeking much further starts to get too time-consuming for me so I left
it at that.)

-- 
Alvaro Herrera  Developer, http://www.PostgreSQL.org/
El destino baraja y nosotros jugamos (A. Schopenhauer)

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


[GENERAL] $libdir

2007-06-04 Thread Ian Harding

I know this is a question that gets asked a zillion times and is
almost always pilot error.

I installed PostgreSQL 8.2.x and the Tsearch2 package on NetBSD which
went fine, but I can't get the tsearch2.sql file to run.

The usual error about file does not exist relative to
$libdir/tsearch2 gets generated.

The docs say $libdir gets expanded to the result of

pg_config --pkglibdir

which in my case returns /usr/pkg/lib/postgresql.

In that directory are the libtsearch2.so... files along with lots of
others, with 755 permissions, owned by root.

Those permissions and ownership look OK to me, read and execute by the
whole world.

What else could be wrong?

- Ian

---(end of broadcast)---
TIP 4: Have you searched our list archives?

  http://archives.postgresql.org/