Re: [GENERAL] Justifying a PG over MySQL approach to a project

2009-12-17 Thread Massa, Harald Armin
Dave,

please also check out the licence and cost terms in detail.
Especially: is it a given that the planned usage will continue to be
within the allowed bounds of the MySQL GPL? Otherwise, are the costs
for commercial MySQL licences budgeted, or has a reserve been set
aside?

PostgreSQL has a GIANT advantage here, with a very, very clear licence
which allows basically anything relevant, without the need to buy
commercial licences.

Harald

-- 
GHUM Harald Massa
persuadere et programmare
Harald Armin Massa
Spielberger Straße 49
70435 Stuttgart
0173/9409607
no fx, no carrier pigeon
-
%s is too gigantic of an industry to bend to the whims of reality



Re: [GENERAL] Training and open source

2009-12-17 Thread Picavet Vincent
Hi,
Will the Talend people one day understand that astroturfing is definitely not a 
good marketing strategy?
Please stop your lame posting once and for all; we are all tired of reading 
your useless babbling.
Vincent

 -Original Message-
 From: pgsql-general-ow...@postgresql.org 
 [mailto:pgsql-general-ow...@postgresql.org] On Behalf Of mrciken
 Sent: Wednesday, 16 December 2009 15:30
 To: pgsql-general@postgresql.org
 Subject: [GENERAL] Training and open source
 
 
 Hello,
  
 Thank you all for your advice. It has been helpful to read your 
 replies. I think we will be looking into open source data integration, 
 as it seems to be the most flexible option with regard to our 
 business. 
 
 A question: what assistance and training capabilities does open source 
 software offer its users? We have downloaded a few solutions - 
 including Talend, which we are enjoying - we are testing them and feel 
 that one day we might need a deeper insight into the software. 
 
 Thank you for your help.


[GENERAL] flagging first row inserted for each group of key

2009-12-17 Thread Ivan Sergio Borgonovo
I've a web application.

I'm logging data related to the landing page and subsequent selected
hits.

create table track_hit (
  hitid serial not null,
  /* pk? I don't mind if the serial wraps around;
  the pk could be (hitid, tracking_time) */
  esid varchar(32), -- related to session
  tracking_time timestamp not null default now()
  -- some other stats
  -- first_hit boolean not null default false, -- ???
);
  
I'd like to be sure I count just one first hit per session (same
esid) in an efficient way; that is, I'd like to mark first hits with
a flag and avoid a group by/min() subquery pass when I'm doing
reporting.
I can't trust the browser, and I want exactly one first hit per
esid, even if rows share the same tracking_time.

Of course the problem is concurrency, but maybe I missed some pg
trick that could help me.

I'm on 8.3 and no plan to move to 8.4 shortly, so no windowing
functions that will make reporting easier/faster.
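One trick that does work on 8.3: a partial unique index plus an
exception block makes claiming the first hit safe under concurrency.
A minimal sketch, assuming the commented-out first_hit column is
enabled and that p_esid is the session-id parameter of a hypothetical
plpgsql logging function:

create unique index track_hit_first_hit_uq
  on track_hit (esid) where first_hit;

-- inside the logging function: try to claim the first hit; under
-- concurrency only one row per esid can get first_hit = true, and
-- any competing insert falls back to false
begin
  insert into track_hit (esid, first_hit) values (p_esid, true);
exception when unique_violation then
  insert into track_hit (esid, first_hit) values (p_esid, false);
end;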

-- 
Ivan Sergio Borgonovo
http://www.webthatworks.it




Re: [GENERAL] flagging first row inserted for each group of key

2009-12-17 Thread A. Kretschmer
In response to Ivan Sergio Borgonovo :
 I'd like to be sure I just count one first hit in a session (same
 esid) in an efficient way that means I'd like to mark them with a

select distinct on (esid) esid, tracking_time from track_hit order by esid, 
tracking_time;

returns only one record for each esid, ordered by tracking_time. Should
work with 8.x, maybe since 7.x (I'm not sure).
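A sketch of an index that supports this pattern directly (the index
name is my own):

create index track_hit_esid_time_idx
  on track_hit (esid, tracking_time);

With that in place the DISTINCT ON can walk the index in order
instead of sorting the whole table.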


HTH, Andreas
-- 
Andreas Kretschmer
Kontakt:  Heynitz: 035242/47150,   D1: 0160/7141639 (mehr: - Header)
GnuPG: 0x31720C99, 1006 CCB4 A326 1D42 6431  2EB0 389D 1DC2 3172 0C99



Re: [GENERAL] Justifying a PG over MySQL approach to a project

2009-12-17 Thread Erik Jones

On Dec 16, 2009, at 10:30 PM, Craig Ringer wrote:

 - If you don't care about your data, MySQL used with MyISAM is *crazy* fast 
 for lots of small simple queries.

This one causes me no end of grief, as too often it's simply touted as
"MyISAM is fast(er)" while leaving off the bit about "for lots of small,
simple queries."  Developers then pick MySQL with MyISAM storage and then
scratch their heads saying, "But!  I heard it was faster...", when I tell
them that the reason their app is crawling is that even moderately complex
reads or writes are starving out the rest of their app, thanks to the
table locks required by MyISAM.  As you mentioned, for the type of active
workloads that MyISAM is good for, you might as well just use memcache
over something more reliable and/or concurrent, or even a simple
key-value or document store if you really don't need transactions.

Erik Jones, Database Administrator
Engine Yard
Support, Scalability, Reliability
866.518.9273 x 260
Location: US/Pacific
IRC: mage2k








Re: [GENERAL] flagging first row inserted for each group of key

2009-12-17 Thread Ivan Sergio Borgonovo
On Thu, 17 Dec 2009 10:38:32 +0100
A. Kretschmer andreas.kretsch...@schollglas.com wrote:

 select distinct on (esid) esid, tracking_time from track_hit order
 by esid, tracking_time;
 
 returns only one record for each esid, ordered by tracking_time.
 Should work with 8.x, maybe since 7.x (I'm not sure)

I think I've tried to resolve a concurrency problem in the wrong
place... still... what is the difference between:

select min(hitid) as h from track_hit group by esid;

and

select distinct on (esid) hitid from track_hit order by esid,
tracking_time;

I don't have enough test data to see whether they perform differently.
The second form seems to perform a little bit faster; I'd have
expected the opposite, with the first performing better.

I think I'll add an index on tracking_time for reporting, and maybe
make the primary key (hitid, tracking_time).
I don't want to be bothered by hitid wrap-around, so I don't want to
make it a pk on its own, but I may still need a pk.

thanks

-- 
Ivan Sergio Borgonovo
http://www.webthatworks.it




[GENERAL] postgres 8.2.4 cores on SUN

2009-12-17 Thread Abraham, Danny
Hi,

postgres cores on SUN 2.9. Disk is at 100% I/O.

(dbx) where
  [1] 0x65eacdd4(0x7fffa36c, 0x400, 0x10032d4e8, 0x1, 0x48, 
0xd6), at 0x65eacdd4
  [2] XLogFileInit(0x48, 0xd6, 0x7fffa85d, 0x1, 0x488000, 0x0), at 
0x10007955c
  [3] XLogWrite(0x48, 0x6, 0x1, 0x1004883f8, 0x100, 0x1192ca8), at 
0x100078990
  [4] XLogFlush(0x7, 0x1192cf0, 0x10048844c, 0x7, 0x1004a3524, 0x0), at 
0x100079428
  [5] RecordTransactionCommit(0x0, 0x296f6120, 0x1004883d0, 0x0, 
0x1004883d5, 0x0), at 0x1000713c0
  [6] CommitTransaction(0x1, 0x49fc00, 0x100488320, 0x488000, 0x1, 
0x1), at 0x1000720a8
  [7] CommitTransactionCommand(0x1, 0x108, 0x4, 0x100488320, 0x0, 0x100072af8), 
at 0x100072c00
  [8] finish_xact_command(0x0, 0x28, 0x1, 0x4a, 0x1, 0x1004a00cc), 
at 0x100202adc
  [9] exec_simple_query(0x1005f7050, 0x1005f8b68, 0x2, 0x1006c42e8, 0x0, 
0x1005f8b88), at 0x1002000fc
  [10] PostgresMain(0x0, 0x0, 0x1, 0x1, 0x1004a3400, 0x51), at 0x10020581c
  [11] BackendRun(0x4, 0x0, 0x35b800, 0x35b800, 0x10035b800, 0x0), at 
0x1001cd0d8
  [12] BackendStartup(0x100540400, 0x100543150, 0x314c00, 0x1004a3400, 0x0, 
0x100314e72), at 0x1001cc300
  [13] ServerLoop(0x100540400, 0x4, 0x6, 0x4a54c86f, 0x, 0x1004bd370), at 
0x1001c943c
  [14] PostmasterMain(0x10049f400, 0x1cac00, 0x100507c00, 0x1, 0x1, 
0x1), at 0x1001c8bc0
  [15] main(0x1, 0x1, 0x352c00, 0x0, 0x10051d4a0, 0x100507f50), at 0x10016faa4

Any idea?

Thanks

Danny


Re: [GENERAL] Slow select

2009-12-17 Thread Sam Mason
On Wed, Dec 16, 2009 at 05:18:12PM -0800, yuliada wrote:
 Sam Mason wrote:
  How about combining all 1000 selects into one?
 
 I can't combine these selects into one, I need to run them one after
 another.

Hum, difficult.  What other information is in the row that you need
back?  Can you turn the table structure around somehow so that the
value is the primary key, and hence only a single row needs to be
found each time?

Other than that, I think you just need faster disks.

 Bitmap Heap Scan on bn_stringvalue v  (cost=228.40..8688.70 rows=2172 
 width=90) (actual time=1129.767..1781.403 rows=104 loops=1)
   Recheck Cond: (lower((value)::text) = 'esr'::text)
   ->  Bitmap Index Scan on idx_stringv  (cost=0.00..227.86 rows=2172 
 width=0) (actual time=1107.974..1107.974 rows=104 loops=1)
 Index Cond: (lower((value)::text) = 'esr'::text)
 Total runtime: 1781.566 ms

It looks like it's doing reasonable things.  I assume you've got a
single disk servicing this: 1781 / (104*2) = ~8.6ms average seek time.

Clustering on value may help, but it's going to take a while.  Its
value depends on how common this operation is compared to other ones.
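If you do try it, a sketch using the index from the plan above (the
USING form needs 8.3+; older releases spell it CLUSTER idx_stringv ON
bn_stringvalue):

CLUSTER bn_stringvalue USING idx_stringv;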

-- 
  Sam  http://samason.me.uk/



[GENERAL] Table Partitioning Advice Request

2009-12-17 Thread Vincenzo Romano
Hi all.

I'm planning to implement table partitioning as suggested (among
other sources) in the official documentation.
I'm using v8.4.2 at the moment.

My case is far from the general one as:
1. I'll never UPDATE or DELETE rows from partitioned tables (only INSERTs)
2. Rows will be inserted one-by-one or, in the worst case, in bunches
of two or three
3. Partitioning will be based upon TIMESTAMP ranges
4. The virtual tables should approach (and possibly go past) 100M rows
5. Most (99%) of the INSERTs (and possibly SELECTs) will actually
operate on a rather small number of partitions (hardly more than 2).

My main TABLE is like the following one:

CREATE TABLE events (
  eventtype text not null,
  item_id int8 not null,
  event_date timestamp not null default now(),
  row_date timestamp not null default now(),
  event_id serial8 primary key
);

where the partitioning would happen over the values of the event_date column.
The row_date column records the row creation TIMESTAMP, as events
can be created relative to the past, the future or the current time.

In my mind a solution which is simple to maintain is to add a simple
RULE ... ON INSERT for every newly created partition table.
The TRIGGER approach, in my opinion, is much more complex to maintain,
as either the body of the function needs to be rewritten as new
partitions are added, or some external TABLE lookup is needed to choose
the actual table name to be used for a (dynamically created) INSERT.
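For what it's worth, the trigger body need not be rewritten if the
partition name can be derived from the row itself. A minimal sketch,
assuming monthly partitions named events_YYYY_MM (EXECUTE ... USING
needs 8.4):

CREATE OR REPLACE FUNCTION events_route() RETURNS trigger AS $f$
BEGIN
  -- derive the partition name from the row and route the insert there
  EXECUTE 'INSERT INTO events_'
          || to_char(NEW.event_date, 'YYYY_MM')
          || ' SELECT ($1).*'
  USING NEW;
  RETURN NULL;  -- skip the insert into the parent table
END;
$f$ LANGUAGE plpgsql;

CREATE TRIGGER events_route_trg
  BEFORE INSERT ON events
  FOR EACH ROW EXECUTE PROCEDURE events_route();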

Now the questions.

1. As the number of RULEs grows over time, how will the efficiency of
the query planner change as it scans the RULEs?
2. In case the previous answer brings bad news for me, is there any
better/different approach to partitioning with TRIGGERs?
3. Is there any more general advice for such approaches?

-- 
Vincenzo Romano
NON QVIETIS MARIBVS NAVTA PERITVS



Re: [GENERAL] Table Partitioning Advice Request

2009-12-17 Thread Sam Jas
Rule is not advisable, Trigger is the best solution. 

---
Thanks 
Sam Jas


Re: [GENERAL] Table Partitioning Advice Request

2009-12-17 Thread Vincenzo Romano
Why?

If you have to choose among a couple hundred partition tables, the
trigger function body is far from trivial!
Do you really think that calling and running a trigger function for
every row is the best solution?

2009/12/17 Sam Jas samja...@yahoo.com

 Rule is not advisable, Trigger is the best solution.

 ---
 Thanks
 Sam Jas



--
Vincenzo Romano
NotOrAnd Information Technologies
cel. +39 339 8083886  | gtalk. vincenzo.rom...@notorand.it
fix. +39 0823 454163  | skype. notorand.it
fax. +39 02 700506964 | msn.   notorand.it
NON QVIETIS MARIBVS NAVTA PERITVS



Re: [GENERAL] How to get text for a plpgsql variable from a file.

2009-12-17 Thread Greg Sabino Mullane



 I need to get a long text from a file into my plpgsql variable.
 Can anyone think of a more straightforward way to read the file than
 the following:  

Sounds like a job for an 'untrusted' procedural language. 
Here's a quick example using plperlu: 

===
\t 
\o /tmp/gtest
SELECT 'OH HAI';
\o  
\t  

CREATE OR REPLACE FUNCTION
read_file(TEXT)   
RETURNS TEXT  
LANGUAGE plperlu  
AS $bc$   
  use strict; 
  use warnings;   

  my $filename = shift;

  index($filename, '/')
and die qq{File name must be an absolute path\n};

  open my $fh, '<', $filename
or die qq{Could not open file $filename: $!\n};

  my $string;
  {
    local $/;
    $string = <$fh>;
  }

  close $fh
or die qq{Could not close file $filename: $!\n};

  return $string;

$bc$;


CREATE OR REPLACE FUNCTION gtest()
RETURNS TEXT  
LANGUAGE plpgsql  
AS $bc$   
DECLARE   
  external_file_contents TEXT;
BEGIN
  SELECT INTO external_file_contents read_file('/tmp/gtest');
  RETURN 'GOT:' || external_file_contents;
END
$bc$;

SELECT gtest();
===

Piping all of the above into psql gives:

Output format is unaligned.
Showing only tuples.
Tuples only is off.
Output format is aligned.
CREATE FUNCTION
CREATE FUNCTION
    gtest    
-------------
 GOT:OH HAI
(1 row)

--
Greg Sabino Mullane g...@turnstep.com
End Point Corporation http://www.endpoint.com/
PGP Key: 0x14964AC8 200912170920
http://biglumber.com/x/web?pk=2529DF6AB8F79407E94445B4BC9B906714964AC8





Re: [GENERAL] Justifying a PG over MySQL approach to a project

2009-12-17 Thread Greg Sabino Mullane



 - MySQL is horizontally scalable via clustering and multi-master
 replication (though you must beware of numerous gotchas). PostgreSQL can
 be used with read-only slaves via Slony/Bucardo/etc replication, but is
 limited to a single authoritative master.

 (There's work ongoing to enable readonly hot standby slaves with
 failover, but no multi-master is on the horizon).

Well that's refreshing: usually Bucardo is mistaken for a system that
only does master-master and not master-slave, rather than vice-versa. :)
You can have two authoritative masters with Bucardo, in addition to
any number of slaves radiating from one or both of those (as well as
just simple master-slaves).

 - It's a cool tool when you want to query and integrate data from all
 sorts of disparate sources, thanks to its support for pluggable storage
 engines. If you want something for data analysis and integration rather
 than safe storage it's well worth looking at.

What sort of sources? I'm curious here to find areas we can improve upon.

--
Greg Sabino Mullane g...@turnstep.com
End Point Corporation http://www.endpoint.com/
PGP Key: 0x14964AC8 200912170927
http://biglumber.com/x/web?pk=2529DF6AB8F79407E94445B4BC9B906714964AC8





Re: [GENERAL] Justifying a PG over MySQL approach to a project

2009-12-17 Thread Merlin Moncure
On Wed, Dec 16, 2009 at 4:02 PM, Gauthier, Dave dave.gauth...@intel.com wrote:
 Hi Everyone:

 Tomorrow, I will need to present to a group of managers (who know nothing
 about DBs) why I chose to use PG over MySQL in a project, MySQL being the
 more popular DB choice with other engineers, and managers fearing things
 that are “different” (risk).  I have a few hard technical reasons (check
 constraint, deferred constraint checking, array data type), but I’m looking
 for a “it’s more reliable” reason.  Again, the audience is managers.  Is
 there an impartial, 3rd party evaluation of the 2 DBs out there that
 identifies PG as being more reliable?  It might mention things like fewer
 incidences of corrupt tables/indexes, fewer daemon crashes, better recovery
 after system crashes, etc... ?

The #1 useful/practical/business-sense feature that postgresql has
over mysql and, afaik, most commercial databases even, is transactional
DDL.  You can update live systems, and if anything goes wrong your
changes roll back.
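A minimal illustration (table name hypothetical):

BEGIN;
ALTER TABLE accounts ADD COLUMN note text;
-- something later in the deployment script fails ...
ROLLBACK;  -- the ALTER TABLE is undone; the live schema is untouched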

merlin



Re: [GENERAL] Table Partitioning Advice Request

2009-12-17 Thread David Fetter
On Thu, Dec 17, 2009 at 02:41:40PM +0100, Vincenzo Romano wrote:
  2009/12/17 Sam Jas samja...@yahoo.com
  Rule is not advisable, Trigger is the best solution.

 If you have to choose among a couple hundred partition tables, the
 trigger function body is far from trivial!

It's possible to generate such code and deploy it automatically.

 You really think that calling and running a trigger function for every
 line is the best solution?

Yes.  Either the trigger function is choosing from a small subset of
the tables, or you know exactly which tables the rows are going into
and insert them there directly.

Oh, and please do trim, and don't top-post.  I've fixed this in this
post.

Cheers,
David.
-- 
David Fetter da...@fetter.org http://fetter.org/
Phone: +1 415 235 3778  AIM: dfetter666  Yahoo!: dfetter
Skype: davidfetter  XMPP: david.fet...@gmail.com
iCal: webcal://www.tripit.com/feed/ical/people/david74/tripit.ics

Remember to vote!
Consider donating to Postgres: http://www.postgresql.org/about/donate



Re: [GENERAL] Table Partitioning Advice Request

2009-12-17 Thread Vincenzo Romano
2009/12/17 David Fetter da...@fetter.org:
 You really think that calling and running a trigger function for every
 line is the best solution?

 Yes.  The trigger function is choosing from a small subset of the
 tables, or you know which tables exactly the rows are going into and
 insert them there.

So I understand that when a table has multiple RULEs, the planner just
browses all of them to check which one(s) will apply.
What I have in mind is a side table with some bookkeeping data, to be
used by the trigger function to select the actual table to insert into.
Do you think this is faster than RULEs?

Is there any performance study for the trigger-based implementation?

 Oh, and please do trim, and don't top-post.  I've fix this in this
 post.

Sorry! :-)



Re: [GENERAL] Automatic truncation of character values casting to the type of a column type

2009-12-17 Thread Justin Bailey
On Wed, Dec 16, 2009 at 7:20 PM, Tom Lane t...@sss.pgh.pa.us wrote:
 Really?  Works for me, in everything back to 7.3.


I must be missing something, because this function fails:

  CREATE OR REPLACE FUNCTION insertShort()
  RETURNS VOID AS
  $BODY$
  DECLARE
    s Short.shortCol%TYPE;
  BEGIN
    SELECT longCol INTO s FROM Long
      WHERE char_length(longCol) > 20;
    INSERT INTO Short (shortCol) VALUES (s);
  END;
  $BODY$
  LANGUAGE 'plpgsql' VOLATILE;

If s were automatically truncated, the insert would succeed, but it
fails with a "value too long" error.

p.s. I used the definitions from my prior email:

 CREATE TABLE Long (longCol varchar(40) );
 CREATE TABLE Short (shortCol varchar(20) );

INSERT INTO Long VALUES ('FOOBAR'), ('BAZ'),
(CAST('2314J1L234J21LK342JKL32J32KL4J123LK4J13L4' AS VARCHAR(40)));



Re: [GENERAL] Automatic truncation of character values casting to the type of a column type

2009-12-17 Thread Tom Lane
Justin Bailey jgbai...@gmail.com writes:
 On Wed, Dec 16, 2009 at 7:20 PM, Tom Lane t...@sss.pgh.pa.us wrote:
 Really?  Works for me, in everything back to 7.3.

 If s was automatically truncated, the insert would succeed, but it
 fails with a value too long error.

Oh, I thought the failure was the behavior you wanted.  There's
no automatic truncation here --- perhaps putting an exception
block around it would be the way to go?
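One way to read that suggestion, as a sketch (the fallback length of
20 is hard-coded here only for illustration):

  BEGIN
    INSERT INTO Short (shortCol) VALUES (s);
  EXCEPTION WHEN string_data_right_truncation THEN
    INSERT INTO Short (shortCol) VALUES (substring(s for 20));
  END;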

regards, tom lane



Re: [GENERAL] Table Partitioning Advice Request

2009-12-17 Thread Vincenzo Romano
2009/12/17 Sam Jas samja...@yahoo.com

 Rule is not advisable, Trigger is the best solution.

Does the trigger solution need the TABLE CHECK constraint?
It looks to me that it won't.



Re: [GENERAL] Table Partitioning Advice Request

2009-12-17 Thread Dimitri Fontaine
Vincenzo Romano vincenzo.rom...@notorand.it writes:
 Is there any performance study for the trigger-based implementation?

Consider that if you use a RULE to partition, then when you DROP a
partition the INSERTs are locked out, because the query depends on
the table being dropped.

That alone could lead you to stop considering RULEs for partitioning.
-- 
dim



Re: [GENERAL] Automatic truncation of character values casting to the type of a column type

2009-12-17 Thread Sam Mason
On Thu, Dec 17, 2009 at 10:24:28AM -0500, Tom Lane wrote:
 Justin Bailey jgbai...@gmail.com writes:
  If s was automatically truncated, the insert would succeed, but it
  fails with a value too long error.
 
 Oh, I thought the failure was the behavior you wanted.  There's
 no automatic truncation here --- perhaps putting an exception
 block around it would be the way to go?

I think the OP wants to be able to use the %TYPE magic in more places
than the grammar currently accepts.  For example, so that you can use it
in a CAST expression as per some of the examples.

Would using DOMAINs be a better alternative?  It certainly works now.
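A sketch of the DOMAIN idea (the domain name is my own):

  CREATE DOMAIN short_text AS varchar(20);
  CREATE TABLE Short (shortCol short_text);
  -- the length now lives in a single named type that the table,
  -- PL/pgSQL declarations and casts can all share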

-- 
  Sam  http://samason.me.uk/



Re: [GENERAL] Automatic truncation of character values casting to the type of a column type

2009-12-17 Thread Adrian Klaver
On Wednesday 16 December 2009 5:05:19 pm Justin Bailey wrote:
 Greetings! I am trying to avoid the old problem of inserting a 40
 character string into a 20 character field. However, I'd like to avoid
 hard-coding the acceptable length (20). Is there a way to say cast to
 the same type as a given column? E.g., if I have tables Long and
 Short:

  CREATE TABLE Long (longCol varchar(40) )
  CREATE TABLE Short (shortCol varchar(20) )

 And this data:

   INSERT INTO Long VALUES ('FOOBAR'), ('BAZ'),
 (CAST('2314J1L234J21LK342JKL32J32KL4J123LK4J13L4' AS VARCHAR(40)))

 Can make values inserted into shortCol have a maximum length of 20
 without hard-coding that value? Something like:

  INSERT INTO Short (ShortCol)
        (SELECT CAST(Long.longCol as Short.shortCol) FROM LONG)

 I am using postgres 8.2.

 Clearly this is a toy example. In the real world, I insert or update
 values in my target table using a stored procedure. I want to
 future-proof my stored procedure against the column lengths on the
 target table changing. Otherwise, I have to update my sproc with new
 lengths if the table ever changes. I have tried using the PL/PGSQL
 feature where types can be copied in a declaration:

  DECLARE
     myVal Short.shortCol%TYPE;
  ...

 But I can still put values which are too long into that variable, so
 it doesn't help me. Sadly, using the same syntax in a CAST fails in
 various ways:

  UPDATE Short SET shortCol = CAST(myVal AS Short.shortCol)
    -- "schema Short does not exist" error
  UPDATE Short SET shortCol = CAST(myVal AS Short.shortCol%TYPE)
    -- syntax error
  UPDATE Short SET shortCol = CAST(myVal AS (Short).shortCol)
    -- syntax error

 Thanks in advance for any advice.

 Justin

My solution would be to declare the varchar without a length restriction
and not worry. Right off the top I see two potential problems with the
truncation procedure you are proposing. One is that if you go and reduce
the field width of the table column, you will have the same truncation
error. Two, what happens to the 20 characters you are losing? They were
important once; are they not now?

-- 
Adrian Klaver
akla...@comcast.net



Re: [GENERAL] Table Partitioning Advice Request

2009-12-17 Thread Vincenzo Romano
2009/12/17 Dimitri Fontaine dfonta...@hi-media.com:
 Vincenzo Romano vincenzo.rom...@notorand.it writes:
 Is there any performance study for the trigger-based implementation?

 Consider that if you use RULE to partition, when you DROP a partition
 the INSERTs are locked out because the query depends on the table being
 droped.

 That alone could lead you to stop considering RULEs for partitioning.

In that case I would also drop the relevant rule(s).
But it seems anyway that the rule system would need to scan all the
rules in order to know which one(s) to apply.
And, at least in my case, I would have hundreds of rules ...

Thanks for the hint, anyway.

-- 
Vincenzo Romano
NON QVIETIS MARIBVS NAVTA PERITVS



Re: [GENERAL] Justifying a PG over MySQL approach to a project

2009-12-17 Thread Gauthier, Dave
They just called the meeting, or at least that part of it.  There seems to be a 
battle brewing; some MySQL advocates are angry, concerned, fearful, ... I don't 
know why for sure.  My managers, who advocate my position and PG, are preparing, 
but the decision will be made by higher-ups who really don't know anything 
about DBs.  They just talk in terms of risk and cost and schedules and yes, 
licenses.  So I'll let them articulate the defense of PG on those terms.  I'm 
just an engineer.  I've been feeding them the valuable input I've been getting 
from this forum, and thanks to all who have contributed.  Really!  



[GENERAL] Extended Query using the Frontend/Backend Protocol 3.0

2009-12-17 Thread Raimon Fernandez
Hello again,


I'm trying to integrate the extended query protocol with my libraries.

I'm sending a simple SELECT to validate the method, but I'm getting an
"Invalid Message Format" error.


50 = P
00 00 00 29 = length
6D7973746174656D656E74 00 = mystatement + null
73656C656374202A2066726F6D206D797461626C653B 00 = select * from mytable; + null
00 00 = number of parameters, zero
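For what it's worth, the declared length adds up: 4 (the length field
itself) + 12 ("mystatement" plus NUL) + 23 (the 22-byte query plus
NUL) + 2 (the parameter count) = 41 = 0x29, so the length of the
Parse message itself looks right.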

any idea ?

thanks and regards,


raimon






Re: [GENERAL] Justifying a PG over MySQL approach to a project

2009-12-17 Thread Grzegorz Jaśkiewicz
On Thu, Dec 17, 2009 at 3:55 PM, Gauthier, Dave dave.gauth...@intel.com wrote:
 They just called the meeting, or at least that part of it.  There seems to be 
 a battle brewing; some MySQL advocates are angry, concerned, fearful, ... I 
 don't know why for sure.

in places like that it is inevitable. There's always going to be a
crowd that fears change; they don't generate reasonable opinions, it
is the fear of change.
It might be hard to fight that, since managers will make the decision
based on reports that they can trust. Scared folks often generate a
lot of feedback. Just like in politics :)


 My managers, who advocate my position and PG, are preparing, but the decision 
 will be made by higher-ups who really don't know anything about DBs.  They 
 just talk in terms of risk and cost and schedules and yes, licenses.  So 
 I'll let them articulate the defense of PG on those terms.  I'm just an 
 engineer.  I've been feeding them the valuable input I've been getting from 
 this forum, and thanks to all who have contributed.  Really!

Well, give them the best report ever. Also, skip the crap they won't
understand. Write first the stuff they will understand, then give them
reasons why they would want to consider it - in their own language.
Skip the engineering stuff. Managers often have a very short attention
span. As soon as it smells like something they don't understand, they
will stop reading.


-- 
GJ



Re: [GENERAL] Justifying a PG over MySQL approach to a project

2009-12-17 Thread Gauthier, Dave
Actually, the DB I'm working on is rather small but has a somewhat complex 
system of constraints and triggers that maintain the data.  Queries will 
outnumber writes (20x at least).  And the DB has to be mirrored at a sister 
site a couple thousand miles away, so I'm looking for a robust DB replication 
system for that.  

These are the key points they will be worried about...
- DB up time (most important), including recovery time after disasters (e.g. 
power outages)
- Data integrity.  I'm addressing this with constraints and using triggers to 
populate columns with derived data.
- Data Quality.  NO CORRUPT TABLES / INDEXES
- Retrofitting existing apps to work with PG.  Perl/DBI is a subtle change in 
the DBD designation.  Some Tcl-MySQL code is tougher.  I'm proposing changing 
everything to go through ODBC as a standard now, and for the future.
- Cost of maintenance.  Do I have to babysit this DB 4 hours every day, or 
does it run by itself?  Is this like Oracle, where we have to hire professional 
24x7 DBAs, or is this hands-off?  That kind of question.

I have a DB up and working.  Runs great, no problems, but very lightly loaded 
and/or used at this time.  Having worked with PG in the past, I'm not worried 
about this piece.

I am more concerned with getting a robust DB replication system up and running. 
 Bucardo looks pretty good, but I've just started looking at the options.  Any 
suggestions?

Thanks!




Re: [GENERAL] Justifying a PG over MySQL approach to a project

2009-12-17 Thread Gauthier, Dave
How difficult is it to switch the master's hat from one DB instance to another? 
 Let's say the master in a master-slave scenario goes down but the slave is 
fine.  Can I designate the slave as being the new master, use it for 
read/write, and then just call the broken master the new slave once it comes 
back to life (something like that)?





Re: [GENERAL] How to get text for a plpgsql variable from a file.

2009-12-17 Thread Erik Jones

On Dec 16, 2009, at 11:19 AM, Erwin Brandstetter wrote:

 Hello,
 
   I need to get a long text from a file into my plpgsql variable.
 Can anyone think of a more straightforward way to read the file than
 the following:
 
 
 CREATE FUNCTION test() RETURNS void AS
 $BODY$
 DECLARE
   mytxt text;
 BEGIN
 
 CREATE TEMP TABLE x (x text);
 COPY x from '/path/to/myfile.txt';
 mytxt := (SELECT x from x);
 
 ...
 
 END;
 $BODY$
  LANGUAGE 'plpgsql' VOLATILE;

If you're allowed, you can use an untrusted procedural language.

Erik Jones, Database Administrator
Engine Yard
Support, Scalability, Reliability
866.518.9273 x 260
Location: US/Pacific
IRC: mage2k








Re: [GENERAL] Justifying a PG over MySQL approach to a project

2009-12-17 Thread Richard Broersma
On Thu, Dec 17, 2009 at 8:23 AM, Gauthier, Dave dave.gauth...@intel.com wrote:
 How difficult is it to switch the master's hat from one DB instance to 
 another?  Let's say the master in a master-slave scenario goes down but the 
 slave is fine.  Can I designate the slave as being the new master, use it for 
 read/write, and then just call the broken master the new slave once it comes 
 back to life (something like that)?


I know someone who uses a revolving Slony master-slave setup between
China and the US.  During US working hours the US server is the
master; during China's working hours the China server becomes the
master.

Of course, the person who constructed this system mentioned it was
woefully challenging.  In his case, not only was the Slony
configuration difficult, but so was finding and keeping stable
communication pathways between China and the US.

-- 
Regards,
Richard Broersma Jr.

Visit the Los Angeles PostgreSQL Users Group (LAPUG)
http://pugs.postgresql.org/lapug



Re: [GENERAL] Justifying a PG over MySQL approach to a project

2009-12-17 Thread Greg Sabino Mullane



 How difficult is it to switch the master's hat from one DB instance 
 to another?  Let's say the master in a master-slave scenario goes   
 down but the slave is fine.  Can I designate the slave as being the 
 new master, use it for read/write, and then just call the broken
 master the new slave once it comes back to life (something like that)?

Sure. Bucardo slaves are not changed at all, so they are already 
read/write and don't need anything special done to unslave them.

One possible way to handle the scenario is:

Assuming three servers:
* A (master) sends changes to B, receives read/write queries
* B (slave) has transaction_read_only set to true, receives read queries
* C has the Bucardo database and daemon

Box A goes down suddenly.

* Stop Bucardo on box C
* Flip the boxes around in the bucardo.db table
* Do a 'bucardo_ctl validate sync all'
  (This will create the needed triggers on B)
* Set B's transaction_read_only to false
* Point your apps at B instead of A for read/write queries

When A comes back up:

* DROP SCHEMA bucardo CASCADE; (drops all triggers)
* Set transaction_read_only to true
* Start Bucardo on C
* Once caught up, point read-only queries to A

If you are in a rush, you point things to B immediately after A fails,
but you'll have to recopy the entire table data to the slave, as the
triggers won't be in place yet.
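One way to implement the read-only flips above, as a sketch (the
database name is hypothetical, and ALTER DATABASE ... SET only
affects new sessions):

ALTER DATABASE appdb SET default_transaction_read_only = false; -- on B at failover
ALTER DATABASE appdb SET default_transaction_read_only = true;  -- on A once re-slaved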

--
Greg Sabino Mullane g...@turnstep.com
End Point Corporation http://www.endpoint.com/
PGP Key: 0x14964AC8 200912171153
http://biglumber.com/x/web?pk=2529DF6AB8F79407E94445B4BC9B906714964AC8





Re: [GENERAL] Justifying a PG over MySQL approach to a project

2009-12-17 Thread Dimitri Fontaine
Gauthier, Dave dave.gauth...@intel.com writes:

 I am more concerned with getting a robust DB replication system up and
 running.  Bucardo looks pretty good, but I've just started looking at
 the options.  Any suggestions?

Master Slave replication? Meaning no writes on the sister site.

If yes, consider Londiste from Skytools. Easy to setup and maintain, and
robust.
  http://wiki.postgresql.org/wiki/Skytools
  http://wiki.postgresql.org/wiki/Londiste_Tutorial

Regards,
-- 
dim



Re: [GENERAL] Justifying a PG over MySQL approach to a project

2009-12-17 Thread Kevin Kempter
On Thursday 17 December 2009 09:51:19 Richard Broersma wrote:
 On Thu, Dec 17, 2009 at 8:23 AM, Gauthier, Dave dave.gauth...@intel.com 
wrote:
  How difficult is it to switch the master's hat from one DB instance to
  another?  Let's say the master in a master-slave scenario goes down but
  the slave is fine.  Can I designate the slave as being the new master,
  use it for read/write, and then just call the broken master the new slave
  once it comes back to life (something like that)?

This is very easy with Slony.  If the master is truly 'dead' you can run a 
Slony 'failover' command like this (note these are slonik commands, where node 
1 is the 'dead' master and node 2 is the current slave):

echo 'Preparing to failover (set id = 1, backup node = 2)';
failover (id = 1, backup node = 2);
echo 'Done';
echo 'Preparing to drop node (set id = 1, event node = 2)';
drop node (id = 1, event node = 2);
echo 'Done';
echo 'Failover complete';



at this point the dead master node is no longer part of the replication 
cluster and the slave is the new master

Once the dead node is rebuilt then you simply add it to the replication 
cluster as a new slave node


Also, if you just wanted to move the master, changing the existing master 
into a slave in the process, you can do this:

lock set (id = 1, origin = 1);
wait for event (origin = 1, confirmed = 2);
echo 'set locked';
move set (id = 1, old origin = 1, new origin = 2);
wait for event (origin = 1, confirmed = 2);
echo 'switchover complete';



 



Re: [GENERAL] pg_dump and ON DELETE CASCADE problem

2009-12-17 Thread CG


--- On Tue, 12/15/09, Adrian Klaver akla...@comcast.net wrote:

 From: Adrian Klaver akla...@comcast.net
 Subject: Re: [GENERAL] pg_dump and ON DELETE CASCADE problem
 To: cgg...@yahoo.com
 Cc: postgresql listserv pgsql-general@postgresql.org, Craig Ringer 
 cr...@postnewspapers.com.au, Scott Marlowe scott.marl...@gmail.com
 Date: Tuesday, December 15, 2009, 6:53 PM
 On Tuesday 15 December 2009 2:33:39 pm CG wrote:
 
 
  Bingo. Showed right up. I did a reindex, and now it
 shows up searching via
  sequential scan or index scan.
 
  So that's pretty scary to have a corrupted index. Once
 I reindexed, I'm
  able to see /a lot/ of data I couldn't before. This is
 the first time in 9
  years that I've been bitten by PostgreSQL, and this
 one HURT.
 
  PostgreSQL didn't crash, so there was no indication of
 failure until the
  demp-reload. To quote from the masters: Although in
 theory this should
  never happen, in practice indexes may become corrupted
 due to software bugs
  or hardware failures. I'm reasonably certain that the
 hardware for the
  server is sound. No crashes, no alarms... That leaves
 sofware bugs.
 
  We're running PostgreSQL 8.4.1. I don't see any
 smoking gun bugfixes in
  8.4.2, but we'll upgrade ASAP anyway...
 
  What are your suggestions for how to proceed?
 
 Interesting, though something is still bothering me. To
 quote from one of your 
 posts upstream;
 
 That was the same failure I got the previous night. I go
 to the live database 
 and rows with that key are /not/ in either one of those
 tables. They /were/ in 
 the tables at one point. I have an ON DELETE trigger that
 copies deleted rows 
 into another table, so I can see that a row with that key
 once existed in those 
 tables.
  
 Would seem that the rows were deleted and should not be there when the 
 table was reindexed. Are the 'new' rows you are seeing also in the 
 delete table?
 

select foo from bar where baz = 'key';

I was mistaken when I said that the row was not in the table. If I had an index 
on baz, and the index was corrupted, that SQL would return 0 rows leading me to 
believe that there were no rows in the table with that key.

And the reason for that row remaining in the database after its foreign-keyed 
parent row was deleted was that the delete operation depended on the 
index to find the rows to delete, and that index was corrupt.

Of course, I had no idea that the index was corrupt when I made my first post.

On the table that has the martian row, there is no delete storage. Since the 
data in the table is trigger-generated for FTI searches, there's no reason to 
keep that data around.

I'm still faced with the unpleasant and unresolved issue of why the index was 
corrupted in the first place.



  



[GENERAL] Trigger function language

2009-12-17 Thread Vincenzo Romano
Hi all.
Is there any performance difference in writing trigger function in SQL
rather than PL/PgSQL?
In my case the body would actually be 1 line.

-- 
Vincenzo Romano
NON QVIETIS MARIBVS NAVTA PERITVS



Re: [GENERAL] Justifying a PG over MySQL approach to a project

2009-12-17 Thread John R Pierce

Gauthier, Dave wrote:


Hi Everyone:

Tomorrow, I will need to present to a group of managers (who know 
nothing about DBs) why I chose to use PG over MySQL in a project, 
MySQL being the more popular DB choice with other engineers, and 
managers fearing things that are “different” (risk). I have a few hard 
technical reasons (check constraint, deferred constraint checking, 
array data type), but I’m looking for a “it’s more reliable” reason. 
Again, the audience is managers. Is there an impartial, 3rd party 
evaluation of the 2 DBs out there that identifies PG as being more 
reliable? It might mention things like fewer incidences of corrupt 
tables/indexes, fewer daemon crashes, better recovery after system 
crashes, etc... ?

let me just say this one word about that:

ORACLE

I think it's a pretty safe assumption that Oracle will not be good for MySQL.





Re: [GENERAL] Trigger function language

2009-12-17 Thread Tom Lane
Vincenzo Romano vincenzo.rom...@notorand.it writes:
 Is there any performance difference in writing trigger function in SQL
 rather than PL/PgSQL?

SQL functions don't support being called as triggers, so performance
questions are a bit beside the point ...
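The attempt fails immediately at creation time, with something like:

CREATE FUNCTION no_sql_trigger() RETURNS trigger
  LANGUAGE sql AS $$ SELECT NULL $$;
-- ERROR:  SQL functions cannot return type trigger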

regards, tom lane



Re: [GENERAL] postgres 8.2.4 cores on SUN

2009-12-17 Thread John R Pierce

Abraham, Danny wrote:


“postgres” cores on SUN 2.9. Disk is at 100% I/O.

(dbx) where

[1] 0x65eacdd4(0x7fffa36c, 0x400, 0x10032d4e8, 0x1, 
0x48, 0xd6), at 0x65eacdd4


[2] XLogFileInit(0x48, 0xd6, 0x7fffa85d, 0x1, 0x488000, 0x0), 
at 0x10007955c


...

Any idea?



not without a lot more input.

where did this postgres 8.2.4 come from, how is it configured, what 
hardware is it running on, what operation(s) are you doing when it 
'cores', what is the disk doing that's 100% busy (e.g. what file(s), etc), 
was this system stable prior to this happening, etc etc etc.





--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] How to remove non-UTF values from a table?

2009-12-17 Thread Bruce Momjian
Tom Lane wrote:
 Raymond O'Donnell r...@iol.ie writes:
  On 16/12/2009 15:01, Richard Broersma wrote:
  It looks like the future 8.5 release will be able to
  perform an in-place upgrade on 8.4.
 
  Really? That would be *wonderful*. I know it's impossible to be
  definitive, but how likely would you say this is?
 
 It's wishful thinking.  We tried to have that for 8.4, and failed: there
 are so many constraints on pg_upgrade that it's not a usable mainstream
 solution.  So far as I've seen there has been no work done that would
 make it any more reliable for 8.5.

I do not share Tom's conclusions.

-- 
  Bruce Momjian  br...@momjian.ushttp://momjian.us
  EnterpriseDB http://enterprisedb.com

  + If your life is a hard drive, Christ can be your backup. +

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] How to remove non-UTF values from a table?

2009-12-17 Thread Bruce Momjian
Tom Lane wrote:
 Raymond O'Donnell r...@iol.ie writes:
  On 16/12/2009 15:01, Richard Broersma wrote:
  It looks like the future 8.5 release will be able to
  perform an in-place upgrade on 8.4.
 
  Really? That would be *wonderful*. I know it's impossible to be
  definitive, but how likely would you say this is?
 
 It's wishful thinking.  We tried to have that for 8.4, and failed: there
 are so many constraints on pg_upgrade that it's not a usable mainstream
 solution.  So far as I've seen there has been no work done that would
 make it any more reliable for 8.5.

Some people just enjoy discouraging people.

pg_migrator (not pg_upgrade) has been used by many people to migrate
from 8.3 to 8.4.  I just helped someone yesterday with a migration. 
pg_migrator threw an error because they had reindexed pg_largeobject,
and pg_migrator was not prepared to handle that.  They also had an old
cluster with a higher autovacuum_freeze_max_age than the new cluster,
causing autovacuum freeze to run during the upgrade and remove clog
files.  These are unusual cases, so I assume there are many people who
successfully use pg_migrator that I never hear from.

Both of these issues are fixed in pg_migrator 8.4.6, released yesterday.
I also talked to Alvaro about using fixed oids to remove these migration
restrictions in 8.5:

o  a user-defined composite data type
o  a user-defined array data type
o  a user-defined enum data type

So, pg_migrator is a work in progress.  It currently requires migration
testing and should be used by experienced Postgres users.  Someday
pg_migrator will be easier to use and more reliable.

-- 
  Bruce Momjian  br...@momjian.ushttp://momjian.us
  EnterpriseDB http://enterprisedb.com

  + If your life is a hard drive, Christ can be your backup. +

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] How to remove non-UTF values from a table?

2009-12-17 Thread Bruce Momjian
Bruce Momjian wrote:
 pg_migrator (not pg_upgrade) has been used by many people to migrate
 from 8.3 to 8.4.  I just helped someone yesterday with a migration. 
 pg_migrator threw an error because they had reindexed pg_largeobject,
 and pg_migrator was not prepared to handle that.  They also had an old
 cluster with a higher autovacuum_freeze_max_age than the new cluster,
 causing autovacuum freeze to run during the upgrade and remove clog
 files.  These are unusual cases, so I assume there are many people who
 successfully use pg_migrator that I never hear from.
 
 Both of these issues are fixed in pg_migrator 8.4.6, released yesterday.
 I also talked to Alvaro about using fixed oids to remove these migration
 restrictions in 8.5:
 
 o  a user-defined composite data type
 o  a user-defined array data type
 o  a user-defined enum data type
 
 So, pg_migrator is a work in progress.  It currently requires migration
 testing and should be used by experienced Postgres users.  Someday
 pg_migrator will be easier to use and more reliable.

Oh, and pg_migrator 8.4.6 works for migrations from 8.4 to 8.5 CVS.

-- 
  Bruce Momjian  br...@momjian.ushttp://momjian.us
  EnterpriseDB http://enterprisedb.com

  + If your life is a hard drive, Christ can be your backup. +

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Justifying a PG over MySQL approach to a project

2009-12-17 Thread Madison Kelly

Gauthier, Dave wrote:

Hi Everyone:

 

Tomorrow, I will need to present to a group of managers (who know 
nothing about DBs) why I chose to use PG over MySQL in a project, MySQL 
being the more popular DB choice with other engineers, and managers 
fearing things that are “different” (risk).  I have a few hard technical 
reasons (check constraint, deferred constraint checking, array data 
type), but I’m looking for an “it’s more reliable” reason.  Again, the 
audience is managers.  Is there an impartial, 3rd-party evaluation of 
the 2 DBs out there that identifies PG as being more reliable?  It might 
mention things like fewer incidents of corrupt tables/indexes, fewer 
daemon crashes, better recovery after system crashes, etc... ?


 


Thanks !


There is an open question about the survivability of MySQL right now, 
given its potential sale. I would not bank on MySQL for any 
long-term project. I am sure that MySQL will live in the long run, but 
there may well be turbulent times ahead if whoever comes to own MySQL 
decides to neglect or kill it and the source gets forked.


Madi

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Server Requirements

2009-12-17 Thread Madison Kelly

Christine Penner wrote:

Hi,

If we have clients that are going to buy new computers or upgrade 
current ones, what can we recommend to them for optimal system 
performance running Postgres? These can be servers or desktop PCs. We can 
have from 1 to 10 users in at a time. At this point all of our databases 
are small, but that can change of course.


Hi Christine,

  The problem with this question is that it is far too vague to be able 
to answer in any meaningful way. You need to add some information to 
your request. Things like:


- Data set size; How many tables, how many columns, how fast will it 
grow, what kind of data are in the columns?
- Performance; are you using triggers, functions, a lot of complex or 
simple queries, lots of UPDATEs, INSERTs and DELETEs?
- Redundancy; How do you plan to backup the data? What performance 
criteria do you have? What's your acceptable down time in the case of a 
failure?
- Interface; Users is one thing, but how many transactions will these 
users incur?
- Budget; How much is your client willing to invest? What about 
long-term maintenance or support contracts?

- Environment; What operating system will postgres run on?

  Answer these questions and you will find the hardware requirements 
will likely begin to become self-evident. If not, ask here again with 
this info and we'll be much more able to help. :)


Madi

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] alter table performance

2009-12-17 Thread Antonio Goméz Soto

Hi,

I am regularly altering tables, adding columns, setting default values, etc.
This very often takes a very long time and is very disk-intensive, and this
gets pretty annoying.

Things are hampered by the fact that some of our servers run PG 7.3.

Suppose I have a table and I want to add a NOT NULL column with a default value.
What I normally do is:

alter table person add column address varchar(64);
update person set address = '' where address IS NULL;
alter table person alter column address set not NULL;
alter table person alter column address set default '';

When the table contains millions of records this takes forever.

Am I doing something wrong? Do other people have the same problems?

Thanks,
Antonio



--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] feature request: create table with unique constraint

2009-12-17 Thread wumpus
begin ;

create table a (
  id serial primary key
  , info text not null -- not interested here
) ;

create table b (
  id serial primary key
  , a_id int not null references a
  , info text not null -- not interested here
  , actual bool not null
) ;

create unique index b_actual on b
  ( a_id , ( case when actual then 0 else id end ) )
;

create table c (
  id serial primary key
  , a_id int not null references a
  , info text not null -- not interested here
  , actual bool not null
  , unique ( a_id , ( case when actual then 0 else id end ) )
) ;


Why can I not define the unique constraint in the
CREATE TABLE? I know this restriction is described in
the manual, but not why it is so.



thanks
Andreas


BTW, why must I give the index a name? The name should
be generated automatically if none is specified, just as
when I define a unique constraint in the CREATE TABLE.



-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] alter table performance

2009-12-17 Thread Alan Hodgson
On Thursday 17 December 2009, Antonio Goméz Soto 
antonio.gomez.s...@gmail.com wrote:
 Hi,

 I am regularly altering tables, adding columns setting default values
 etc. This very often takes a very long time and is very disk intensive,
 and this gets pretty annoying.

 Things are hampered by the fact that some of our servers run PG 7.3

 Suppose I have a table and I want to add a non NULL column with a default
 value. What I normally do is:

 alter table person add column address varchar(64);
 update person set address = '' where address IS NULL;
 alter table person alter column address set not NULL;
 alter table person alter column address set default '';

 When the table contains millions of records this takes forever.

 Am I doing something wrong? Do other people have the same problems?

 Thanks,
 Antonio

You could speed it up:

- drop all indexes on the table
- alter table person add column address varchar(64) not null default ''
- recreate the indexes

It will require exclusive access to the table for the duration, but it'll be 
a lot faster and result in a lot less bloat than what you're doing now. It 
still has to rewrite the whole table, but it's a lot faster than UPDATE. 

(I have no idea if this works on 7.3).
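
As a rough sketch of that sequence (the index definition is made up, and
the one-step ADD COLUMN ... DEFAULT form needs 8.0 or later):

BEGIN;
DROP INDEX person_name_idx;   -- repeat for each index on the table
ALTER TABLE person
    ADD COLUMN address varchar(64) NOT NULL DEFAULT '';
CREATE INDEX person_name_idx ON person (name);
COMMIT;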

-- 
No animals were harmed in the recording of this episode. We tried but that 
damn monkey was just too fast.

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] alter table performance

2009-12-17 Thread Thomas Kellerer

Antonio Goméz Soto wrote on 17.12.2009 22:26:

Hi,

I am regularly altering tables, adding columns setting default values etc.
This very often takes a very long time and is very disk intensive, and this
gets pretty annoying.

Things are hampered by the fact that some of our servers run PG 7.3

Suppose I have a table and I want to add a non NULL column with a
default value.
What I normally do is:

alter table person add column address varchar(64);
update person set address = '' where address IS NULL;
alter table person alter column address set not NULL;
alter table person alter column address set default '';

When the table contains millions of records this takes forever.

Am I doing something wrong? Do other people have the same problems?


What's wrong with:

alter table person add column address varchar(64) not null default '';

Although I don't know if such a prehistoric version as 7.3 would support 
that.

It works on 8.4, and I believe it worked on 8.3 and 8.2 as well.

Thomas


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] pg_dump and ON DELETE CASCADE problem

2009-12-17 Thread Adrian Klaver



- CG cgg...@yahoo.com wrote:

 --- On Tue, 12/15/09, Adrian Klaver akla...@comcast.net wrote:
 
  From: Adrian Klaver akla...@comcast.net
  Subject: Re: [GENERAL] pg_dump and ON DELETE CASCADE problem
  To: cgg...@yahoo.com
  Cc: postgresql listserv pgsql-general@postgresql.org, Craig
 Ringer cr...@postnewspapers.com.au, Scott Marlowe
 scott.marl...@gmail.com
  Date: Tuesday, December 15, 2009, 6:53 PM
  On Tuesday 15 December 2009 2:33:39
  pm CG wrote:
  
  
   Bingo. Showed right up. I did a reindex, and now it
  shows up searching via
   sequential scan or index scan.
  
   So that's pretty scary to have a corrupted index. Once
  I reindexed, I'm
   able to see /a lot/ of data I couldn't before. This is
  the first time in 9
   years that I've been bitten by PostgreSQL, and this
  one HURT.
  
   PostgreSQL didn't crash, so there was no indication of
  failure until the
   demp-reload. To quote from the masters: Although in
  theory this should
   never happen, in practice indexes may become corrupted
  due to software bugs
   or hardware failures. I'm reasonably certain that the
  hardware for the
   server is sound. No crashes, no alarms... That leaves
  sofware bugs.
  
   We're running PostgreSQL 8.4.1. I don't see any
  smoking gun bugfixes in
   8.4.2, but we'll upgrade ASAP anyway...
  
   What are your suggestions for how to proceed?
  
  Interesting, though something is still bothering me. To
  quote from one of your 
  posts upstream;
  
  That was the same failure I got the previous night. I go
  to the live database 
  and rows with that key are /not/ in either one of those
  tables. They /were/ in 
  the tables at one point. I have an ON DELETE trigger that
  copies deleted rows 
  into another table, so I can see that a row with that key
  once existed in those 
  tables.
   
  Would seem that the rows where deleted and should not be
  there when the table 
  was reindexed. Are the 'new' rows you are seeing also in
  the delete table?
  
 
 select foo from bar where baz = 'key';
 
 I was mistaken when I said that the row was not in the table. If I had
 an index on baz, and the index was corrupted, that SQL would return 0
 rows leading me to believe that there were no rows in the table with
 that key.
 
 And, the reason for that row remaining in the database after its
 foreign keyed parent row was deleted was because the delete operation
 was depending on the index to find the rows to delete, and that index
 was corrupt.
 
 Of course, I had no idea that the index was corrupt when I made my
 first post.
 
 On the table that has the martian row, there is no delete storage.
 Since the data in the table is trigger-generated for FTI searches,
 there's no reason to keep that data around.

Would it be possible to see the table schemas and indices ?

 
 I'm still faced with the unpleasant and unresolved issue of why the
 index was corrupted in the first place.
 
 

Adrian Klaver
akla...@comcast.net

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] feature request: create table with unique constraint

2009-12-17 Thread Tom Lane
wum...@z1p.biz writes:
   , unique ( a_id , ( case when actual then 0 else id end ) )

 Why can i not define the unique constraint in the
 create table?

The syntax for a unique constraint in CREATE TABLE is defined by the SQL
standard, and it doesn't include expressions.  More than the syntax, a
unique constraint also ought to show up in the information_schema views,
and those don't have the ability to handle expressions.  So we have the
capability, but it's only available via CREATE INDEX which is outside
the standard.

regards, tom lane

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] alter table performance

2009-12-17 Thread Tom Lane
Thomas Kellerer spam_ea...@gmx.net writes:
 What's wrong with:
 alter table person add column address varchar(64) not null default '';

This:

regression=# alter table person add column address varchar(64) not null default 
'';
ERROR:  Adding columns with defaults is not implemented.
Add the column, then use ALTER TABLE SET DEFAULT.
regression=# \q

This is just one of many many things that could be improved by getting
off of 7.3.  In general, complaining about performance (or features)
of a seven-year-old, long since EOL'd release is not a productive use of
anybody's time.

regards, tom lane

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Installing PL/pgSQL by default

2009-12-17 Thread Bruce Momjian
Bruce Momjian wrote:
 Tom Lane wrote:
  Bruce Momjian br...@momjian.us writes:
   I installed PL/pgSQL by default via initdb with the attached patch.  The
   only problem is that pg_dump still dumps out the language creation:
 CREATE PROCEDURAL LANGUAGE plpgsql;
 ALTER PROCEDURAL LANGUAGE plpgsql OWNER TO postgres;
   What is odd is that I used the same process that initdb uses to create
   other objects.  Does anyone know why this is happening?
  
  I think pg_dump pays attention to what schema the objects are in,
  and that's most likely creating them in PUBLIC.  Try adding
  set search_path = pg_catalog.
  
  It's not impossible that we'll have to tweak pg_dump a bit; it's
  never had to deal with languages that shouldn't be dumped ...
 
 I found that pg_dump tests for pg_language.lanispl == true, which is
 true for all the stored procedure languages.  I can easily special case
 plpgsql, or check for FirstNormalObjectId, though I don't see that used
 in pg_dump currently.
 
 A more difficult issue is whether we should preserve the fact that
 plpgsql was _removed_ in the pg_dump output, i.e, if someone removes
 plpgsql from a database, do we issue a DROP LANGUAGE in pg_dump?  I
 don't remember us having to deal with anything like this before.

OK, the attached patch installs plpgsql by default from initdb, and
supresses the dumping of CREATE LANGUAGE in 8.5 and in 8.3/8.4 if binary
upgrade is used (because you know you are upgrading to a release that
has plpgsql installed by default).  The 8.3/8.4 is necessary so the
schema load doesn't generate any errors and cause pg_migrator to exit.

-- 
  Bruce Momjian  br...@momjian.ushttp://momjian.us
  EnterpriseDB http://enterprisedb.com

  + If your life is a hard drive, Christ can be your backup. +
Index: doc/src/sgml/installation.sgml
===================================================================
RCS file: /cvsroot/pgsql/doc/src/sgml/installation.sgml,v
retrieving revision 1.333
diff -c -c -r1.333 installation.sgml
*** doc/src/sgml/installation.sgml	15 Dec 2009 22:59:53 -	1.333
--- doc/src/sgml/installation.sgml	17 Dec 2009 23:35:36 -
***************
*** 2266,2279 ****
   is <command>createlang</command> failing with unusual errors.
   For example, running as the owner of the PostgreSQL installation:
  <screen>
! -bash-3.00$ createlang plpgsql template1
! createlang: language installation failed: ERROR:  could not load library "/opt/dbs/pgsql748/lib/plpgsql.so": A memory address is not in the address space for the process.
  </screen>
  Running as a non-owner in the group posessing the PostgreSQL
  installation:
  <screen>
! -bash-3.00$ createlang plpgsql template1
! createlang: language installation failed: ERROR:  could not load library "/opt/dbs/pgsql748/lib/plpgsql.so": Bad address
  </screen>
   Another example is out of memory errors in the PostgreSQL server
   logs, with every memory allocation near or greater than 256 MB
--- 2266,2279 ----
   is <command>createlang</command> failing with unusual errors.
   For example, running as the owner of the PostgreSQL installation:
  <screen>
! -bash-3.00$ createlang plperl template1
! createlang: language installation failed: ERROR:  could not load library "/opt/dbs/pgsql748/lib/plperl.so": A memory address is not in the address space for the process.
  </screen>
  Running as a non-owner in the group posessing the PostgreSQL
  installation:
  <screen>
! -bash-3.00$ createlang plperl template1
! createlang: language installation failed: ERROR:  could not load library "/opt/dbs/pgsql748/lib/plperl.so": Bad address
  </screen>
   Another example is out of memory errors in the PostgreSQL server
   logs, with every memory allocation near or greater than 256 MB
   Another example is out of memory errors in the PostgreSQL server
   logs, with every memory allocation near or greater than 256 MB
Index: src/bin/initdb/initdb.c
===================================================================
RCS file: /cvsroot/pgsql/src/bin/initdb/initdb.c,v
retrieving revision 1.178
diff -c -c -r1.178 initdb.c
*** src/bin/initdb/initdb.c	11 Dec 2009 03:34:56 -	1.178
--- src/bin/initdb/initdb.c	17 Dec 2009 23:35:36 -
***************
*** 176,181 ****
--- 176,182 ----
  static void setup_privileges(void);
  static void set_info_version(void);
  static void setup_schema(void);
+ static void load_plpgsql(void);
  static void vacuum_db(void);
  static void make_template0(void);
  static void make_postgres(void);
***************
*** 1894,1899 ****
--- 1895,1925 ----
  }
  
  /*
+  * load PL/pgsql server-side language
+  */
+ static void
+ load_plpgsql(void)
+ {
+ 	PG_CMD_DECL;
+ 
+ 	fputs(_("loading PL/pgSQL server-side language ... "), stdout);
+ 	fflush(stdout);
+ 
+ 	snprintf(cmd, sizeof(cmd),
+ 			 "\"%s\" %s template1 >%s",
+ 			 backend_exec, backend_options,
+ 			 DEVNULL);
+ 
+ 	PG_CMD_OPEN;
+ 
+ 	PG_CMD_PUTS("CREATE LANGUAGE plpgsql;\n");
+ 
+ 	PG_CMD_CLOSE;
+ 
+ 	check_ok();
+ }
+ 
+ /*
   * clean everything up in template1
   */
  static void
***************
*** 3126,3131 ****
--- 3152,3159 ----

Re: [GENERAL] Table Partitioning Advice Request

2009-12-17 Thread Scott Marlowe
On Thu, Dec 17, 2009 at 8:29 AM, Vincenzo Romano
vincenzo.rom...@notorand.it wrote:
 2009/12/17 Sam Jas samja...@yahoo.com

 Rule is not advisable, Trigger is the best solution.

 Does the trigger solution need the TABLE CHECK constraint?
 It looks to me like it won't.

The table check constraint is used during selects to pick just the
child tables needed to deliver the data you are asking for, assuming
the key value is in the where clause.

The main reason to avoid rules is that they're much less efficient
than triggers.  We use partitioning at work for our stats db, and
partition by day, and we have about 2 years worth of stats data, so
our insert trigger has about 700 if / elseif / else conditions in it,
and the trigger is still very very fast.  I tried it with rules before
and it was way too slow.

Note that the absolute fastest way to insert to the right child table
is to do it from the application side, choosing the right child table
there.

We automated our trigger creation.  While the trigger is huge in terms
of number of lines, speed-wise the creation of the new trigger each
night at midnight is measured in milliseconds.
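
For illustration, a heavily trimmed sketch of what such a generated
routing trigger can look like (table and column names are invented; the
real thing has one branch per day):

CREATE OR REPLACE FUNCTION stats_insert_router() RETURNS trigger AS $$
BEGIN
    -- One branch per daily partition; regenerated nightly.
    IF NEW.logdate >= DATE '2009-12-17' AND NEW.logdate < DATE '2009-12-18' THEN
        INSERT INTO stats_20091217 VALUES (NEW.*);
    ELSIF NEW.logdate >= DATE '2009-12-16' AND NEW.logdate < DATE '2009-12-17' THEN
        INSERT INTO stats_20091216 VALUES (NEW.*);
    ELSE
        RAISE EXCEPTION 'no partition for date %', NEW.logdate;
    END IF;
    RETURN NULL;  -- keep the row out of the parent table
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER stats_insert_trig
    BEFORE INSERT ON stats
    FOR EACH ROW EXECUTE PROCEDURE stats_insert_router();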

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] alter table performance

2009-12-17 Thread Antonio Goméz Soto

On 17-12-09 23:46, Tom Lane wrote:

This is just one of many many things that could be improved by getting
off of 7.3.  In general, complaining about performance (or features)
of a seven-year-old, long since EOL'd release is not a productive use of
anybody's time.
   


I'm sorry, didn't mean to.

I was just checking if I did it the right way, or if it was supposed to 
be that slow.


I know we should upgrade the client machines, but in some cases we just 
can't.


Anyway, thanks for giving me the answer.

Antonio


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Extended Query using the Frontend/Backend Protocol 3.0

2009-12-17 Thread John DeSoi

On Dec 17, 2009, at 11:13 AM, Raimon Fernandez wrote:

 I'm trying to integrate the extended query protocol with my libraries.
 
 I'm sending a simple SELECT to validate the method, but I'm getting an 
 Invalid Message Format.

I did not add up your byte count, but maybe this will help:


(write-byte p stream)
(write-int32 (+ int32-length (length name) 1 (length sql-string) 1 int16-length 
(* int32-length param-count)) stream) 
(write-cstring name stream)
(write-cstring sql-string stream)
(write-int16 param-count stream)
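
For reference, the byte accounting that code implements for a Parse ('P')
message is:

    length = 4                            (the int32 length field itself)
           + length(statement_name) + 1   (C string, including the NUL)
           + length(query_string) + 1     (C string, including the NUL)
           + 2                            (int16 parameter-type count)
           + 4 * param_count              (one int32 type OID per parameter)

The one-byte 'P' message type is not included in the length field.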



John DeSoi, Ph.D.





-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Justifying a PG over MySQL approach to a project

2009-12-17 Thread Scott Marlowe
On Thu, Dec 17, 2009 at 12:35 PM, Madison Kelly li...@alteeve.com wrote:
 Gauthier, Dave wrote:

 Hi Everyone:


 Tomorrow, I will need to present to a group of managers (who know nothing
 about DBs) why I chose to use PG over MySQL in a project, MySQL being the
 more popular DB choice with other engineers, and managers fearing things
 that are “different” (risk).  I have a few hard technical reasons (check
 constraint, deferred constraint checking, array data type), but I’m looking
 for an “it’s more reliable” reason.  Again, the audience is managers.  Is
 there an impartial, 3rd-party evaluation of the 2 DBs out there that
 identifies PG as being more reliable?  It might mention things like fewer
 incidents of corrupt tables/indexes, fewer daemon crashes, better recovery
 after system crashes, etc... ?


 Thanks !

 There is a current question about the survivability of MySQL right now with
 the potential sale of MySQL. I would not bank on MySQL for any long-term
 project. I am sure that MySQL will live in the long run, but they may well
 be turbulent times ahead if whomever comes to own MySQL decides to neglect
 or kill it and the source gets forked.

It's important to separate out MySQL AB the company, owned by Sun,
soon to be owned by Oracle, from MySQL the GPL licensed database,
which may or may not allow you to distribute your own commercial code
without buying a license.  Given the OSS License loophole, and the
fact that many of those OSS licenses do NOT require the release of
source code, there's every possibiliy you could release your
commercial code under the BSD code to yourself, and then give only the
compiled code to customers and you'd technically be safe distributing
MySQL with it.

There are several companies in the MySQL biosphere that are releasing
their own forks of MySQL with lots of bug fixes, and I have no doubt
that MySQL the GPL database will continue to be available as a GPL
product for quite some time.  However, the availability of
commercially licensed MySQL may or may not continue based on the
business needs of Oracle.

For in house use only, this is a non-issue, as the GPL only affects
distribution of MySQL, not internal usage.

The biggest argument I'd use against MySQL in general is that
PostgreSQL is a better database whenever your data and its
integrity are important.

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Server Requirements

2009-12-17 Thread Scott Marlowe
On Wed, Dec 16, 2009 at 4:21 PM, Christine Penner
christ...@ingenioussoftware.com wrote:
 Hi,

 If we have clients that are going to buy new computers or upgrade current
 ones, what can we recommend to them for optimal system performance running
 Postgres? These can be servers or desktop PCs. We can have from 1 to 10 users
 in at a time. At this point all of our databases are small, but that can
 change of course.

Like Craig said, your description is pretty vague, so it's hard to say
what you need most to make your pgsql server run fast.

Generally, the order of things to do to make it fast is: add memory,
add hard drives, add battery-backed caching RAID controllers, add more
CPU horsepower.

But it depends largely on your usage patterns.  So, what are you doing
with your db?

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Justifying a PG over MySQL approach to a project

2009-12-17 Thread Gauthier, Dave
One concern I have about these trigger-based replication systems is that I fear 
they may ping the slave for each and every DML statement separately, even 
within a transaction.  My slave will literally be 1400 miles away and all 
replication communications will be over the net.  If I have a transaction which 
has 1000 DML statements in it, is this thing going to update the slave 1000 
times separately over the net? (I may not live long enough to see it finish.)  
Or will it be smart enough to wait until I commit, then send over a single 
bundle of 1000 DML statements?  The time difference will be more than significant.

Thanks for all the great input on this!

-----Original Message-----
From: pgsql-general-ow...@postgresql.org 
[mailto:pgsql-general-ow...@postgresql.org] On Behalf Of Greg Sabino Mullane
Sent: Thursday, December 17, 2009 11:58 AM
To: pgsql-general@postgresql.org
Subject: Re: [GENERAL] Justifying a PG over MySQL approach to a project


-----BEGIN PGP SIGNED MESSAGE-----
Hash: RIPEMD160


 How difficult is it to switch the master's hat from one DB instance
 to another?  Let's say the master in a master-slave scenario goes
 down but the slave is fine.  Can I designate the slave as being the
 new master, use it for read/write, and then just call the broken
 master the new slave once it comes back to life (something like that)?

Sure. Bucardo slaves are not changed at all, so they are already
read/write and don't need anything special done to unslave them.

One possible way to handle the scenario is:

Assuming three servers:
* A (master) sends changes to B, receives read/write queries
* B (slave) has transaction_read_only set to true, receives read queries
* C has the Bucardo database and daemon

Box A goes down suddenly.

* Stop Bucardo on box C
* Flip the boxes around in the bucardo.db table
* Do a 'bucardo_ctl validate sync all'
  (This will create the needed triggers on B)
* Set B's transaction_read_only to false
* Point your apps at B instead of A for read/write queries

When A comes back up:

* DROP SCHEMA bucardo CASCADE; (drops all triggers)
* Set transaction_read_only to true
* Start Bucardo on C
* Once caught up, point read-only queries to A

If you are in a rush, you point things to B immediately after A fails,
but you'll have to recopy the entire table data to the slave, as the
triggers won't be in place yet.

- --
Greg Sabino Mullane g...@turnstep.com
End Point Corporation http://www.endpoint.com/
PGP Key: 0x14964AC8 200912171153
http://biglumber.com/x/web?pk=2529DF6AB8F79407E94445B4BC9B906714964AC8
-----BEGIN PGP SIGNATURE-----

iEYEAREDAAYFAksqYqUACgkQvJuQZxSWSsjZtQCfTwbI3f9W0z+82IU7lL+2LwNK
aUYAnj3AMjRDOeFIuHDee4JJemneArie
=75Ho
-----END PGP SIGNATURE-----



--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general



Re: [GENERAL] pg_dump and ON DELETE CASCADE problem

2009-12-17 Thread CG


--- On Thu, 12/17/09, Adrian Klaver akla...@comcast.net wrote:

 
 Would it be possible to see the table schemas and indices
 ?
 
  

Sure (you asked for it!!) :

CREATE TABLE packet
(
  id integer NOT NULL DEFAULT nextval('packet_id_seq'::regclass),
  packet_uuid uniqueidentifier NOT NULL DEFAULT newid(),
  username character varying(50) NOT NULL DEFAULT ''::character varying,
  pgroup_uuid uniqueidentifier DEFAULT newid(),
  orig_trans_uuid uniqueidentifier,
  user_reference_id character varying(50) DEFAULT ''::character varying,
  trans_data character varying(100) NOT NULL DEFAULT ''::character varying,
  trans_type character varying(50) NOT NULL DEFAULT 'unknown'::character 
varying,
  trans_date timestamp with time zone DEFAULT ('now'::text)::timestamp(6) with 
time zone,
  processor character varying(10),
  service character varying(10),
  CONSTRAINT packet_pkey PRIMARY KEY (id)
)
WITH (
  OIDS=TRUE
);
ALTER TABLE packet OWNER TO postgres;
GRANT ALL ON TABLE packet TO postgres;
GRANT ALL ON TABLE packet TO adduser;

CREATE INDEX packet_otuuid_idx
  ON packet
  USING btree
  (orig_trans_uuid);

CREATE INDEX packet_pgroup_uuid_idx
  ON packet
  USING btree
  (pgroup_uuid);

CREATE INDEX packet_puuid_hash_uniq
  ON packet
  USING hash
  (packet_uuid);

CREATE UNIQUE INDEX packet_puuid_idx
  ON packet
  USING btree
  (packet_uuid);

CREATE INDEX packet_trans_date_idx
  ON packet
  USING btree
  (trans_date);

CREATE INDEX packet_user_idx
  ON packet
  USING btree
  (username);

CREATE INDEX packet_user_puuid_idx
  ON packet
  USING btree
  (username, packet_uuid);

CREATE OR REPLACE RULE packet_delete_rule AS
ON DELETE TO packet DO  INSERT INTO removed_packet (id, packet_uuid, 
username, pgroup_uuid, orig_trans_uuid, user_reference_id, trans_data, 
trans_type, trans_date, processor, service)  SELECT packet.id, 
packet.packet_uuid, packet.username, packet.pgroup_uuid, 
packet.orig_trans_uuid, packet.user_reference_id, packet.trans_data, 
packet.trans_type, packet.trans_date, packet.processor, packet.service
   FROM packet
  WHERE packet.id = old.id;

CREATE TRIGGER packet_count_delete_trig
  BEFORE DELETE
  ON packet
  FOR EACH ROW
  EXECUTE PROCEDURE letter_count_trig();

CREATE TRIGGER packet_count_insert_trig
  AFTER INSERT
  ON packet
  FOR EACH ROW
  EXECUTE PROCEDURE letter_count_trig();

CREATE TRIGGER packet_delete_trig
  BEFORE DELETE
  ON packet
  FOR EACH ROW
  EXECUTE PROCEDURE packet_datalink_status_trig();

CREATE TRIGGER packet_insert_trig
  AFTER INSERT
  ON packet
  FOR EACH ROW
  EXECUTE PROCEDURE packet_ins_trig();

CREATE TABLE dpo.packet_search_trigram
(
  id integer NOT NULL DEFAULT nextval('packet_search_trigram_id_seq'::regclass),
  packet_uuid uniqueidentifier NOT NULL,
  trigram_vector tsvector NOT NULL,
  CONSTRAINT packet_search_trigram_id_pkey PRIMARY KEY (id),
  CONSTRAINT packet_search_trigram_puuid_fkey FOREIGN KEY (packet_uuid)
  REFERENCES dpo.packet (packet_uuid) MATCH SIMPLE
  ON UPDATE CASCADE ON DELETE CASCADE
)
WITH (
  OIDS=FALSE
);
ALTER TABLE dpo.packet_search_trigram OWNER TO postgres;
GRANT ALL ON TABLE dpo.packet_search_trigram TO postgres WITH GRANT OPTION;
GRANT ALL ON TABLE dpo.packet_search_trigram TO addgroup;

CREATE INDEX packet_search_trigram_packet_uuid_idx
  ON dpo.packet_search_trigram
  USING hash
  (packet_uuid);

CREATE INDEX packet_search_trigram_trigram_vector_idx
  ON dpo.packet_search_trigram
  USING gin
  (trigram_vector);



  

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Justifying a PG over MySQL approach to a project

2009-12-17 Thread Scott Marlowe
I would recommend using a traffic-shaping router (like the one built
into the Linux kernel and controlled by tc / iptables) to simulate a
long-distance connection and testing this yourself to see which
replication engine will work best for you.

On Thu, Dec 17, 2009 at 7:35 PM, Gauthier, Dave dave.gauth...@intel.com wrote:
 One concern I have about these trigger based replication systems is that I
 fear it may ping the slave for each and every DML statement separately in
 time and in a transaction.  My slave will literally be 1400 miles away and
 all replication communications will be over the net.  If I have a
 transaction which has 1000 DML statements in it, is this thing going to
 update the slave 1000 times separately over the net ? (I may not live long
 enough to see it finish)  Or will it be smart enough to wait until I
 commit then send over a single bundle of 1000 DML?  The time diff will be
 more than significant.

 Thanks for all the great input on this!

 -----Original Message-----
 From: pgsql-general-ow...@postgresql.org
 [mailto:pgsql-general-ow...@postgresql.org] On Behalf Of Greg Sabino Mullane
 Sent: Thursday, December 17, 2009 11:58 AM
 To: pgsql-general@postgresql.org
 Subject: Re: [GENERAL] Justifying a PG over MySQL approach to a project


 -----BEGIN PGP SIGNED MESSAGE-----
 Hash: RIPEMD160


 How difficult is it to switch the master's hat from one DB instance
 to another?  Let's say the master in a master-slave scenario goes
 down but the slave is fine.  Can I designate the slave as being the
 new master, use it for read/write, and then just call the broken
 master the new slave once it comes back to life (something like that)?

 Sure. Bucardo slaves are not changed at all, so they are already
 read/write and don't need anything special done to unslave them.

 One possible way to handle the scenario is:

 Assuming three servers:
 * A (master) sends changes to B, receives read/write queries
 * B (slave) has transaction_read_only set to true, receives read queries
 * C has the Bucardo database and daemon

 Box A goes down suddenly.

 * Stop Bucardo on box C
 * Flip the boxes around in the bucardo.db table
 * Do a 'bucardo_ctl validate sync all'
   (This will create the needed triggers on B)
 * Set B's transaction_read_only to false
 * Point your apps at B instead of A for read/write queries

 When A comes back up:

 * DROP SCHEMA bucardo CASCADE; (drops all triggers)
 * Set transaction_read_only to true
 * Start Bucardo on C
 * Once caught up, point read-only queries to A

 If you are in a rush, you point things to B immediately after A fails,
 but you'll have to recopy the entire table data to the slave, as the
 triggers won't be in place yet.

 - --
 Greg Sabino Mullane g...@turnstep.com
 End Point Corporation http://www.endpoint.com/
 PGP Key: 0x14964AC8 200912171153
 http://biglumber.com/x/web?pk=2529DF6AB8F79407E94445B4BC9B906714964AC8
 -----BEGIN PGP SIGNATURE-----

 iEYEAREDAAYFAksqYqUACgkQvJuQZxSWSsjZtQCfTwbI3f9W0z+82IU7lL+2LwNK
 aUYAnj3AMjRDOeFIuHDee4JJemneArie
 =75Ho
 -----END PGP SIGNATURE-----



 --
 Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
 To make changes to your subscription:
 http://www.postgresql.org/mailpref/pgsql-general




-- 
When fascism comes to America, it will be intolerance sold as diversity.

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Justifying a PG over MySQL approach to a project

2009-12-17 Thread Greg Sabino Mullane

-----BEGIN PGP SIGNED MESSAGE-----
Hash: RIPEMD160


 One concern I have about these trigger based replication systems is
 that I fear it may ping the slave for each and every DML statement
 separately in time and in a transaction.  My slave will literally be
 1400 miles away and all replication communications will be over the
 net.  If I have a transaction which has 1000 DML statements in it, is
 this thing going to update the slave 1000 times separately over the
 net ? (I may not live long enough to see it finish)  Or will it be
 smart enough to wait until I commit then send over a single bundle
 of 1000 DML?  The time diff will be more than significant.

Both Slony and Bucardo are transaction based, meaning all 1000 changes
will happen at once, and only when the transaction has committed. In
practice, you'll find that both are surprisingly quick.

With Bucardo, the number of changes may be much less than 1000, as it
is data based, not statement based. Thus, as an extreme example, if
the 1000 statements are all updating a single row, only a single
update (actually a delete/insert) is done on the slave.

- --
Greg Sabino Mullane g...@turnstep.com
End Point Corporation http://www.endpoint.com/
PGP Key: 0x14964AC8 200912172144
http://biglumber.com/x/web?pk=2529DF6AB8F79407E94445B4BC9B906714964AC8
-----BEGIN PGP SIGNATURE-----

iEYEAREDAAYFAksq7RMACgkQvJuQZxSWSsgvNQCg0Og/Ilg7rO9UjBecCTVwf04x
E1cAn05wdiJSh+VN3XpZrsFhNJYUTYjG
=VLf6
-----END PGP SIGNATURE-----



-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Justifying a PG over MySQL approach to a project

2009-12-17 Thread David Boreham

Scott Marlowe wrote:

I would recommend using a traffic shaping router (like the one built
into the linux kernel and controlled by tc / iptables) to simulate a
long distance connection and testing this yourself to see which
replication engine will work best for you.
  
Netem: http://www.linuxfoundation.org/collaborate/workgroups/networking/netem
We used this to make a test rig for Directory Server replication, to
verify a re-design that added pipelining to the replication protocol.
It's already in the modern Linuxes--just needs to be configured.





--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Justifying a PG over MySQL approach to a project

2009-12-17 Thread Scott Marlowe
On Thu, Dec 17, 2009 at 7:51 PM, David Boreham david_l...@boreham.org wrote:
 Scott Marlowe wrote:

 I would recommend using a traffic shaping router (like the one built
 into the linux kernel and controlled by tc / iptables) to simulate a
 long distance connection and testing this yourself to see which
 replication engine will work best for you.


 Netem  :
 http://www.linuxfoundation.org/collaborate/workgroups/networking/netem
 We used this to make a test rig for Directory Server replication, to verify
 a
 re-design that added pipelining to the replication protocol.
 It's already in the modern Linuxes--just needs to be configured.

Wow, every time I turn around someone's built something cool from a set
of small sharp tools.  Thanks!

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] order of trigger firing relative to column/table constraints

2009-12-17 Thread Daniel Popowich

I am designing a DB where column/table constraints are not sufficient
for data integrity (e.g., guaranteeing non-recursive tree graphs) so
I'm writing my first complicated triggers and I have been searching
docs and archives of this list for detailed information on when
triggers are fired relative to constraints created with CREATE TABLE
and/or ALTER TABLE.


For example, in what phase are CHECK and FOREIGN KEY constraints (as
well as NOT NULL and UNIQUE constraints) checked relative to the order
of triggers firing?  Documentation clearly outlines custom trigger
order as:

  1. before-statement

  2. before-row
     [before-row ...]

  --data now visible--

  3. after-row
     [after-row ...]

  4. after-statement

For example, it was at first surprising to discover in my before-row
trigger that foreign key constraints had not yet been checked (which
I assumed I could take for granted in my trigger since I had defined
the column constraint).  Which means the foreign key constraint
checking will have to be done twice: once in my custom trigger in the
before-row phase (because my logic requires it there) and again when
the foreign key column constraint is checked (whenever that is).

In summary, I have one general question and two specific questions:

General: is it documented somewhere in any detail the order of
column/table constraint checking relative to custom triggers.

Specific #1: Is there a way to control the order of column/table
constraints relative to custom triggers?

Specific #2: If, say, CHECK and FOREIGN KEY constraints are checked in
the after-row phase (or just before it, but after the before-row
phase), so that I need to move my custom constraint logic to the
after-row phase to take advantage of these constraints, is the only way
for me to abort the insert or update to raise an error?  (If my logic
is in the before-row phase, which is an intuitive place to put it, I
could just return null, but that won't work in the after-row phase.)


BTW, if I can get decent info, I'd be willing to volunteer to
contribute well-written documentation to give back to the cause.


Thanks,

Dan

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Table Partitioning Advice Request

2009-12-17 Thread Vincenzo Romano
2009/12/18 Scott Marlowe scott.marl...@gmail.com:
 The main reason to avoid rules is that they're much less efficient
 than triggers.  We use partitioning at work for our stats db, and
 partition by day, and we have about 2 years worth of stats data, so
 our insert trigger has about 700 if / elseif / else conditions in it,
 and the trigger is still very very fast.  I tried it with rules before
 and it was way too slow.

I still don't understand the poor efficiency of the rule system. I presume it's
in the implementation, not the concept.

Well, for the sake of efficiency I would like to avoid that if/elseif chain
by using an extra support table to look up the right partition.
Unless you have different advice against dynamic SQL (EXECUTE ...)
to compute the partition table name.

 Note that the absolute fastest way to insert to the right child table
 is to do it from the application side, choosing the right child table
 there.

I definitely disagree with this advice. For the sake of speed I could
also run some library-level (and thus application-level) database
solution.
The main objective for using an RDBMS (at least in my view) is to
gather in a single place (the RDBMS) the data storage and
management stuff.
Moving policies to the application level would just scatter that
stuff with faster but more error-prone solutions.
And the application needs to know the partitioning policies.

 We automated our trigger creation.  While the trigger is huge in terms
 of number of lines, speed wise the creation of the new trigger each
 night at midnight is measured in milliseconds

Correct.
There is still another point for me to investigate in this partitioning
area: when the partitions get created.
I see from your answers that everyone expects the partitions to
be already in place at the moment the trigger or the rule gets fired.
In my application you can generate events for the past or the future
where there's no partition yet.
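
For concreteness, a sketch of the dynamic-SQL variant I have in mind (the
naming scheme and table are invented; EXECUTE ... USING needs 8.4):

CREATE OR REPLACE FUNCTION events_insert_router() RETURNS trigger AS $$
DECLARE
    part text := 'events_' || to_char(NEW.event_date, 'YYYYMMDD');
BEGIN
    -- A lookup against a support table, or a CREATE TABLE for a missing
    -- partition, could go here before the insert.
    EXECUTE 'INSERT INTO ' || quote_ident(part) || ' SELECT ($1).*'
        USING NEW;
    RETURN NULL;
END;
$$ LANGUAGE plpgsql;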

-- 
Vincenzo Romano
NON QVIETIS MARIBVS NAVTA PERITVS

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] order of trigger firing relative to column/table constraints

2009-12-17 Thread Craig Ringer

On 18/12/2009 1:18 PM, Daniel Popowich wrote:


For example, it was at first surprising to discover in my before-row
trigger that foreign key constraints had not yet been checked (which
I assumed I could take for granted in my trigger since I had defined
the column constraint).


Nope. CHECK constraints and NOT NULL constraints aren't tested either. 
After all, you might want to have the client leave those fields null (or 
even force them to leave them null/default using column permissions) and 
then populate them from your trigger.


Examples include `inserted by user', `last modified by user' etc 
columns, where you don't want the user to have the ability to set or 
alter them, so you might GRANT them permission to modify all columns 
except those ones, and set those columns from a trigger.



 Which means the foreign key constraint
checking will have to be done twice: once in my custom trigger in the
before-row phase (because my logic requires it there) and again when
the foreign key column constraint is checked (whenever that is).


... unless you can move your logic to an AFTER trigger. You can still 
roll back the change by throwing an exception.


AFTER is a better place to do this sort of thing anyway, really. Your 
BEFORE triggers might modify the data in ways that change the results of 
your checks, so they're quite dependent on trigger firing order. Much 
safer to put it in AFTER, though you do incur the overhead of doing the 
work and rolling it back that way.



In summary, I have one general question and two specific questions:

General: is it documented somewhere in any detail the order of
column/table constraint checking relative to custom triggers.


IIRC, fkey checks are just AFTER ... FOR EACH ROW triggers. I *think* 
they're just fired in alphabetical order along with the other triggers, 
but I'm not sure what name they have - if it's the fkey constraint name, 
or something else.



abort the insert or update by raising an error?


Sure. RAISE an exception. I think that's basically how the fkey checks 
do it.
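
A minimal sketch of that pattern (the table and the cycle test are
invented and deliberately simplistic):

-- In the AFTER ROW phase, CHECK and NOT NULL have already been applied;
-- abort by raising, since an AFTER trigger's return value is ignored.
CREATE OR REPLACE FUNCTION tree_no_cycle() RETURNS trigger AS $$
BEGIN
    IF EXISTS (SELECT 1 FROM tree
               WHERE id = NEW.parent_id AND parent_id = NEW.id) THEN
        RAISE EXCEPTION 'node % would create a cycle', NEW.id;
    END IF;
    RETURN NULL;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER tree_no_cycle_trig
    AFTER INSERT OR UPDATE ON tree
    FOR EACH ROW EXECUTE PROCEDURE tree_no_cycle();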


--
Craig Ringer

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general