Re: [GENERAL] How to use read uncommitted transaction level and set update order

2009-12-20 Thread Andrus
You cannot access new values of a  particular row within a single UPDATE 
statement, but you do see new  values done in the same transaction.

This is explained in some detail in the documentation:

http://www.postgresql.org/docs/8.4/interactive/transaction-iso.html#XACT-READ-COMMITTED


I tried

drop table if exists tt ;
create temp table tt ( a int, b int );
insert into tt values ( 1,2);
insert into tt values ( 3,4);
update tt set a=a*10, b=(select sum(a) from tt);
select * from tt

b has value 4 for every row.

So we *don't* see new values made in the same transaction.
How can this be fixed?

Andrus. 



--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] How to use read uncommitted transaction level and set update order

2009-12-20 Thread Scott Marlowe
On Sun, Dec 20, 2009 at 2:12 AM, Andrus kobrule...@hot.ee wrote:
 You cannot access new values of a  particular row within a single UPDATE
 statement, but you do see new  values done in the same transaction.
 This is explain in some detail in the documentation:


 http://www.postgresql.org/docs/8.4/interactive/transaction-iso.html#XACT-READ-COMMITTED

 I tried

 drop table if exists tt ;
 create temp table tt ( a int, b int );
 insert into tt values ( 1,2);
 insert into tt values ( 3,4);
 update tt set a=a*10, b=(select sum(a) from tt);
 select * from tt

 b has value 4 for every row.

 So we *dont* see new  values done in the same transaction.
 How to fix ?

This isn't broken behaviour.

First the inserts run and we have

1,2
3,4

When the update fires, the right-hand sides of the assignments are
evaluated simultaneously, based on the data in the table AT THE TIME
the query starts.  b=sum(a) means b=sum(3,1), which means you're
setting b=4...  This was explained in a previous post by Tom, I
believe.  Unless I'm missing what you're saying.
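
For what it's worth, a minimal sketch of getting the intended result in
this toy case without row-by-row processing: since every a is multiplied
by 10, the post-update sum is just the pre-update sum times 10, and both
expressions are evaluated against the same snapshot:

update tt set a=a*10, b=(select sum(a)*10 from tt);
select * from tt;   -- b is now 40 in every row, i.e. the new sum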

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Extracting SQL from logs in a usable format

2009-12-20 Thread JGuillaume (ioguix) de Rorthais

Hey guys,

Albe Laurenz wrote:
 Chris Ernst wrote:
 I have a project where I need to be able to capture every query
 from a production system into a file such that I can replay
 them on a staging system.  Does such a thing already exist or
 should I start writing my own log parser?

I have the same problem. Our goal would be to capture a representative
period from an 8.1 server and replay it against an 8.4 server to find out
every possible issue.

 I am currently developing such a beast, it is currently still quite
 alpha. If you are interested I can send you a copy. I'll try to
 publish it once it is sufficiently beta.

Interesting project, but I have one big issue under 8.1 with advanced
queries (prepare / bind / execute): we cannot extract the parameter values
from the logs as we can with 8.2. So I am not able to parse / rebuild
queries from 8.1 logs.

I started something as well, based on tcpdump/tshark output (tshark
-VT text ...). My project is at a pre-alpha stage, but at least I can
extract both simple queries and advanced queries w/ params. The only
known limitations with this approach are:
 - cannot extract from SSL connections
 - ISTM tshark only supports the PostgreSQL V3 protocol, so it only works
 on 7.2 and later, IIRC

 Yours, Laurenz Albe

--
Jehan-Guillaume (ioguix) de Rorthais
www.dalibo.com


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Extracting SQL from logs in a usable format

2009-12-20 Thread JGuillaume (ioguix) de Rorthais

Shoaib Mir wrote:
 On Sun, Dec 20, 2009 at 9:37 AM, Albe Laurenz laurenz.a...@wien.gv.at wrote:

 Chris Ernst wrote:
 I have a project where I need to be able to capture every
 query from a
 production system into a file such that I can replay them on
 a staging
 system.  Does such a thing already exist or should I start
 writing my
 own log parser?


 I am not sure if it's still available, but there used to be a nice tool
 for doing the same; I guess it was named Playr, by myyearbook.

AFAIK, it's a dead project and its home at myyearbook is unavailable.
However, you can still access their download page:
https://area51.myyearbook.com/downloads/

I did find the svn repo with some googling some time ago...

I did some quick investigation of it; it seems it doesn't work with
extended queries either.

 Regards, -- Shoaib Mir http://shoaibmir.wordpress.com/

--
Jehan-Guillaume (ioguix) de Rorthais
www.dalibo.com



-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Extracting SQL from logs in a usable format

2009-12-20 Thread hubert depesz lubaczewski
On Fri, Dec 18, 2009 at 02:08:07PM -0700, Chris Ernst wrote:
 Hi all,
 
 I have a project where I need to be able to capture every query from a
 production system into a file such that I can replay them on a staging
 system.  Does such a thing already exist or should I start writing my
 own log parser?

Log to CSV format.

But:
1. not all parts of the query will always be logged in the query itself
(prepared statements)
2. replaying queries on a 2nd machine doesn't guarantee that you will get
the same data afterwards.
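
A minimal postgresql.conf sketch of that CSV-logging setup (assuming 8.3
or later, where csvlog is available; before 8.3 the collector setting was
called redirect_stderr):

log_destination = 'csvlog'
logging_collector = on
log_directory = 'pg_log'
log_statement = 'all'        # log every statement; heavy on a busy system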

depesz

-- 
Linkedin: http://www.linkedin.com/in/depesz  /  blog: http://www.depesz.com/
jid/gtalk: dep...@depesz.com / aim:depeszhdl / skype:depesz_hdl / gg:6749007

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Extracting SQL from logs in a usable format

2009-12-20 Thread Craig Ringer

On 20/12/2009 7:59 PM, hubert depesz lubaczewski wrote:

On Fri, Dec 18, 2009 at 02:08:07PM -0700, Chris Ernst wrote:

Hi all,

I have a project where I need to be able to capture every query from a
production system into a file such that I can replay them on a staging
system.  Does such a thing already exist or should I start writing my
own log parser?


log to CSV format.

But:
1. not all parts of the query will always be logged in the query itself
(prepared statements)
2. replaying queries on a 2nd machine doesn't guarantee that you will get
the same data afterwards.


... because of global settings (DATESTYLE etc) that may affect 
interpretation of the data, and because the log order of statements 
can't accurately represent concurrent execution.


With the same server settings, the same starting values for sequences 
etc, no time-based function use, no non-deterministic function use (eg: 
random()) and no non-deterministic interactions between concurrent 
transactions, you should be able to get data that's the same when 
examined at the SQL level. It might not be in the same order, though, 
and it certainly won't be the same on-disk.


So ... why do you need this replay? What sorts of limitations can you 
live with?


It sounds like concurrency is a concern, and that's one that will give 
you pain, because the Pg logs don't record statement start _and_ end 
time, nor do they record at what points along the execution timeline the 
backend got a chance to do work. So it's hard to know about lock 
acquisition order, among other things.


--
Craig Ringer

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Transaction started test

2009-12-20 Thread Larry Anderson

Hi All,

I'm new to Postgres and need a way to test if a transaction is already 
in progress.


The test will take place inside a trigger function in PL/pgSQL and will 
start a new transaction only if one is not in progress, i.e. one started by a 
previous trigger that cascaded through to this trigger. I cannot find any 
such function in the docs.


Any help much appreciated.

Best regards


Larry Anderson



Re: [GENERAL] Transaction started test

2009-12-20 Thread Craig Ringer

On 20/12/2009 9:02 PM, Larry Anderson wrote:

Hi All,

I'm new to Postgres and need a way to test if a transaction is already
in progress.

The test will take place inside a trigger function in pl/pgsql and will
start a new transaction only if one is not in progress


You can't do that, I'm afraid.

A PL/PgSQL function cannot be called without already being in a 
transaction. Absolutely every regular SQL statement in PostgreSQL runs 
in a transaction. If there isn't already an open transaction, the 
top-level statement will start one.


So:

SELECT fred();

outside a transaction is equivalent to:

BEGIN;
SELECT fred();
COMMIT;

Note that PostgreSQL supports functions, but not true stored procedures 
that can manipulate transactions. A Pl/PgSQL function can't commit or 
roll back a transaction. PostgreSQL has no support for autonomous 
transactions either, so you can't start a new separate transaction 
inside a function and commit that whether the surrounding transaction 
commits or rolls back.


What it *does* have is subtransactions. If you need nested transactions, 
you can use subtransactions to get the same effect.
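
For example, a PL/PgSQL exception block runs its statements in a
subtransaction; a rough sketch (the table and function names here are
made up):

CREATE OR REPLACE FUNCTION try_insert(v integer) RETURNS void AS $$
BEGIN
    BEGIN                                   -- inner block = subtransaction
        INSERT INTO t (col) VALUES (v);
    EXCEPTION WHEN unique_violation THEN
        -- only the inner block's work is rolled back; the outer
        -- transaction carries on
        RAISE NOTICE 'duplicate value %, skipped', v;
    END;
END;
$$ LANGUAGE plpgsql;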



ie started by a
previous trigger that cascaded through to this trigger. Cannot find any
such function in the docs.


In either case, the statement that caused the trigger to be invoked will 
have started a transaction if one was not already in progress. So you 
are _always_ in a transaction.


(Hmm... I think this needs to be in the FAQ. Added to my TODO.).

--
Craig Ringer


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Justifying a PG over MySQL approach to a project

2009-12-20 Thread Lincoln Yeoh

At 05:44 AM 12/17/2009, Greg Smith wrote:
You've probably already found 
http://wiki.postgresql.org/wiki/Why_PostgreSQL_Instead_of_MySQL:_Comparing_Reliability_and_Speed_in_2007 
which was my long treatment of this topic (and overdue for an update).


The main thing I intended to put into such an update when I get to 
it is talking about the really deplorable bug handling situation for 
MySQL, which is part of how all the data corruption issues show 
up.  There's a good overview of its general weirdness at 
http://www.xaprb.com/blog/2007/08/12/what-would-make-me-buy-mysql-enterprise/ 
and the following series of pages lead you through my favorite set of bugs:


More so when Monty himself grumbles about the bug handling situation:

http://monty-says.blogspot.com/2008/11/oops-we-did-it-again-mysql-51-released.html

If people still insist on MySQL, you might want to get it in writing 
that it's someone else's decision to use MySQL and not yours ;).


Ten or so years ago MySQL was better than Postgres95, and it would 
have been easy to justify using MySQL over Postgres95 (which was 
really slow and had a fair number of bugs). But Postgresql is much 
better than MySQL now. That's just my opinion of course.


Link



--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Extracting SQL from logs in a usable format

2009-12-20 Thread Albe Laurenz
JGuillaume (ioguix) de Rorthais wrote:
 I am currently developing such a beast, it is currently still quite
 alpha. If you are interested I can send you a copy. I'll try to
 publish it once it is sufficiently beta.

 Interesting project, but but I have one big issue under 8.1 and
 advanced query (prepare / bind / execute): we cannot extract values of
 parameters from the logs with 8.2. So I am not able to parse /
 rebuilt query from logs under 8.1.

Hmm, that doesn't bother me. 8.1 is pretty old now, and who knows when
my program will be stable :^)
 
 I started something as well, based on tcpdump/tshark output (tshark
 - -VT text ...). My project is in pre-alpha step, but at least I can
 extract both simple queries and advanced queries w/ params. The only
 known limitations with this approach are :
  - cannot extract from SSL connections
  - ISTM tshark only support PostgreSQL V3 protocol. So only work on
  7.2  IIRC

I guess each approach has some limitations.
The limitations I encountered for log parsing:
- COPY data are not logged.
- Fast Path API calls are not logged (that includes large object functions).
- Unless you have log_min_messages at DEBUG2 or better, you cannot
  determine when exactly a prepared statement was parsed.
 
Yours,
Laurenz Albe

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Transaction started test

2009-12-20 Thread Larry Anderson

Hi Craig,

Many thanks for the detailed and quick reply. I must admit that although I'd 
read that every statement was implicitly in a transaction, I hadn't 
connected that through to the operations in any associated triggers.


Best regards

Larry Anderson

Craig Ringer wrote:

On 20/12/2009 9:02 PM, Larry Anderson wrote:

Hi All,

I'm new to Postgres and need a way to test if a transaction is already
in progress.

The test will take place inside a trigger function in pl/pgsql and will
start a new transaction only if one is not in progress


You can't do that, I'm afraid.

A PL/PgSQL function cannot be called without already being in a 
transaction. Absolutely every regular SQL statement in PostgreSQL runs 
in a transaction. If there isn't already an open transaction, the 
top-level statement will start one.


So:

SELECT fred();

outside a transaction is equivalent to:

BEGIN;
SELECT fred();
COMMIT;

Note that PostgreSQL supports functions, but not true stored 
procedures that can manipulate transactions. A Pl/PgSQL function can't 
commit or roll back a transaction. PostgreSQL has no support for 
autonomous transactions either, so you can't start a new separate 
transaction inside a function and commit that whether the surrounding 
transaction commits or rolls back.


What it *does* have is subtransactions. If you need nested 
transactions, you can use subtransactions to get the same effect.



ie started by a
previous trigger that cascaded through to this trigger. Cannot find any
such function in the docs.


In either case, the statement that caused the trigger to be invoked 
will have started a transaction if one was not already in progress. So 
you are _always_ in a transaction.


(Hmm... I think this needs to be in the FAQ. Added to my TODO.).

--
Craig Ringer






--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] How to use read uncommitted transaction level and set update order

2009-12-20 Thread Albe Laurenz
You are confusing a few things, and you don't want to hear the
explanations because they are inconvenient.
 
Andrus wrote:
 1. In my case b expression needs values from previous rows updated in this
 same command before:

You are confusing "to the left of" and "before".
If you want behaviour that deviates from the SQL standard, you will
usually meet fierce resistance from PostgreSQL.

 I understood from replies that

 set transaction isolation level read uncommitted;

 in PostgreSQL is broken: it silently sets READ COMMITTED isolation level.

You should read this:
http://www.postgresql.org/docs/8.4/static/transaction-iso.html
 
I agree that the behaviour may be surprising, but "broken" is polemic.
This is not always a good comparison when standard behaviour is
concerned, but Oracle does it the same way.

 2. In my planned UPDATE statement instead of 4 there is an expression
 containing one big CASE WHEN expression with many WHEN .. THEN  clauses.
 This command  takes several hundreds of lines.
 Your solution requires repeating this expression two times and thus makes
 sql difficult to read.

... plus the expression would be evaluated twice. But you cannot hold that 
against
the person who gave you the advice, because you hid that fact.
 
Why don't you let your imagination play a little:
 
1) You could use a subquery like
  UPDATE foo SET col = myex
  FROM (SELECT foo_id, <your 100 lines here> AS myex FROM whatever ...) AS bar
  WHERE foo.foo_id = bar.foo_id;
2) You could define a stable SQL function for your 100-line expression, which
  should be evaluated only once in the UPDATE query (a rough sketch follows
  below).
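
A rough sketch of option 2, with made-up table and column names (an SQL
function on 8.x refers to its arguments as $1, $2, ...):

CREATE OR REPLACE FUNCTION big_case(integer) RETURNS integer AS $$
    SELECT CASE WHEN $1 < 10 THEN $1 * 10   -- stand-in for the real
           ELSE $1                          -- hundreds of WHEN ... THEN clauses
           END;
$$ LANGUAGE sql STABLE;

UPDATE foo SET col = big_case(a);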
 
Yours,
Laurenz Albe

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] How to use read uncommitted transaction level and set update order

2009-12-20 Thread Christophe Pettus

Hi, Andrus,

First, it does seem that you are expecting PostgreSQL to have the same  
behavior as a flat-file manager such as FoxPro (indeed, it seems you'd  
like PG to have the behavior of a *specific* flat-file manager).   
Despite the superficial similarity in the command syntax, a modern  
RDBMS is a very different animal from FoxPro, dBase, 4D and the like,  
and needs to be approached on its own terms rather than expecting the  
semantics of commands with the same keyword to be the same.  While  
that may seem to be an irritating and pointless transition, modern  
RDBMSes are so much more powerful than flat-file managers that you'll  
find the transition well worth your time.


On Dec 20, 2009, at 1:12 AM, Andrus wrote:


I tried

drop table if exists tt ;
create temp table tt ( a int, b int );
insert into tt values ( 1,2);
insert into tt values ( 3,4);
update tt set a=a*10, b=(select sum(a) from tt);
select * from tt

b has value 4 for every row.

So we *dont* see new  values done in the same transaction.


You seem to have a specific model for execution in mind, and that  
model is not the one PostgreSQL (or any other standards-compliant SQL  
database) will use.  Within each UPDATE statement, the UPDATE is  
operating on a snapshot of the database at the time the command begins  
execution.  That's what the SQL standard requires, as Tom Lane noted  
earlier.


If you want to iterate through each row, applying changes, using 
PL/pgSQL with cursors is probably the best solution:


http://www.postgresql.org/docs/8.4/interactive/plpgsql-cursors.html
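
A rough sketch of that row-by-row approach for the tt example above (a
FOR loop over a query uses an implicit cursor; ctid is used only because
tt has no primary key):

CREATE OR REPLACE FUNCTION rolling_update() RETURNS void AS $$
DECLARE
    r   record;
    tot integer := 0;
BEGIN
    FOR r IN SELECT ctid, a FROM tt ORDER BY a LOOP
        tot := tot + r.a * 10;    -- running total of the new a values
        UPDATE tt SET a = r.a * 10, b = tot WHERE ctid = r.ctid;
    END LOOP;
END;
$$ LANGUAGE plpgsql;

SELECT rolling_update();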

If you can be a bit more detailed about what you are trying to  
accomplish, we can help you more.

--
-- Christophe Pettus
   x...@thebuild.com


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Extracting SQL from logs in a usable format

2009-12-20 Thread Dimitri Fontaine
On 19 Dec 2009, at 16:20, Chris Ernst wrote:

 Hmm.. That does look very interesting.  The only thing that concerns me
 is where it says it supports "Basic Queries (Extended queries not yet
 supported)".  I'm not sure what is meant by "Extended queries".  Any idea?

I think it refers to the Extended Query support in the frontend / backend 
protocol, as in the documentation:
  
http://www.postgresql.org/docs/8.4/static/protocol-flow.html#PROTOCOL-FLOW-EXT-QUERY

  "The extended query protocol breaks down the above-described simple query 
protocol into multiple steps. The results of preparatory steps can be re-used 
multiple times for improved efficiency. Furthermore, additional features are 
available, such as the possibility of supplying data values as separate 
parameters instead of having to insert them directly into a query string."

So that's for parse/bind/execute communications, which are used in 
prepare/execute and queryParam I think.
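
As an aside, the SQL-level PREPARE/EXECUTE commands are not those protocol
messages themselves (drivers speak the protocol directly), but they show the
same parse-once, bind-parameters-later idea; a small sketch against a
hypothetical table:

PREPARE get_item (integer) AS SELECT * FROM items WHERE id = $1;  -- parsed once
EXECUTE get_item(42);                                             -- bind + execute
EXECUTE get_item(43);
DEALLOCATE get_item;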
-- 
dim
-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Justifying a PG over MySQL approach to a project

2009-12-20 Thread Merlin Moncure
On Sun, Dec 20, 2009 at 10:04 AM, Lincoln Yeoh ly...@pop.jaring.my wrote:
 At 05:44 AM 12/17/2009, Greg Smith wrote:

 You've probably already found
 http://wiki.postgresql.org/wiki/Why_PostgreSQL_Instead_of_MySQL:_Comparing_Reliability_and_Speed_in_2007
 which was my long treatment of this topic (and overdue for an update).

 The main thing I intended to put into such an update when I get to it is
 talking about the really deplorable bug handling situation for MySQL, which
 is part of how all the data corruption issues show up.  There's a good
 overview of its general weirdness at
 http://www.xaprb.com/blog/2007/08/12/what-would-make-me-buy-mysql-enterprise/
 and the following series of pages lead you through my favorite set of bugs:

 More so when Monty himself grumbles about the bug handling situation:

 http://monty-says.blogspot.com/2008/11/oops-we-did-it-again-mysql-51-released.html

 If people still insist on MySQL, you might want to get it in writing that
 it's someone else's decision to use MySQL and not yours ;).

 Ten or so years ago MySQL was better than Postgres95, and it would have been
 easy to justify using MySQL over Postgres95 (which was really slow and had a
 fair number of bugs). But Postgresql is much better than MySQL now. That's
 just my opinion of course.

I don't think anybody is going to dispute that here.  IMO, Postgres is
just completely in another league in technical terms.  From a
business point of view, the BSD license is great, but I can understand
being nervous about the availability and price of PostgreSQL talent.  In
the long run, though, you are much better off with one of us!

merlin

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Justifying a PG over MySQL approach to a project

2009-12-20 Thread Ron Mayer
Lincoln Yeoh wrote:
 Ten or so years ago MySQL was better than Postgres95, and it would have
 been easy to justify using MySQL over Postgres95 (which was really slow
 and had a fair number of bugs). But Postgresql is much better than MySQL
 now. That's just my opinion of course.

Really?!?

MySQL development started in '94; and their first internal release was May 
95.[1]

At that time Postgres's SQL language support was new, but didn't the underlying
database already have a half decade of history that surely was more mature
than MySQL at the time?

I thought the main justification for MySQL back then was that they had
better Win95 support (and a quality-control philosophy better matched to
the old pre-NT Windows, favoring time-to-market over correctness).

[1] http://en.wikipedia.org/wiki/MySQL#cite_note-21


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Justifying a PG over MySQL approach to a project

2009-12-20 Thread Merlin Moncure
On Sun, Dec 20, 2009 at 3:19 PM, Ron Mayer
rm...@cheapcomplexdevices.com wrote:
 Lincoln Yeoh wrote:
 Ten or so years ago MySQL was better than Postgres95, and it would have
 been easy to justify using MySQL over Postgres95 (which was really slow
 and had a fair number of bugs). But Postgresql is much better than MySQL
 now. That's just my opinion of course.

 Really?!?

 MySQL development started in '94; and their first internal release was May 
 95.[1]

 At that time Postgres's SQL language support was new, but didn't the 
 underlying
 database already have a half decade of history that surely was more mature
 than MySQL at the time?

For a long time, postgres had a lot of issues that made it less
suitable for high web environments, especially 24x7 high load.  vacuum
was a nightmare, transaction overhead was very high, and the complex
disk format made upgrades a real pain (only this last issue remains).
The PostgreSQL optimizer has always had an edge, but it wasn't so cut
and dried back then.  It was only with the 8.x versions that Postgres
really started pulling away.

merlin

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Justifying a PG over MySQL approach to a project

2009-12-20 Thread Tom Lane
Ron Mayer rm...@cheapcomplexdevices.com writes:
 Lincoln Yeoh wrote:
 Ten or so years ago MySQL was better than Postgres95, and it would have
 been easy to justify using MySQL over Postgres95 (which was really slow
 and had a fair number of bugs). But Postgresql is much better than MySQL
 now. That's just my opinion of course.

 Really?!?

 MySQL development started in '94; and their first internal release was May 
 95.[1]

 At that time Postgres's SQL language support was new, but didn't the 
 underlying
 database already have a half decade of history that surely was more mature
 than MySQL at the time?

What the underlying database had was a decade or so of development and use
for strictly academic purposes.  This anecdote might help:
http://archives.postgresql.org/pgsql-hackers/2002-06/msg00085.php

It was not until the current community started working on it, circa
1997, that there was any real emphasis on making it stable enough for
production use.  And I would say that we didn't get to the point of
being really production-worthy until 2001 or so, by which time the
"Postgres sucks" meme was already pretty widely established.  And
so was MySQL.  We've been playing catchup in the public-perception
department ever since.

regards, tom lane

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Justifying a PG over MySQL approach to a project

2009-12-20 Thread Gauthier, Dave
The arguments against PG are not technical.  The biggest advocate for MySQL is 
actually a very sharp engineer who admits that PG is a superior DB.  But MySQL 
is more popular in the corp and has more formal recognition.  So he's saying 
that the differences aren't big enough to justify using PG.

A statement from a disinterested third party saying that data in a PG DB is 
more secure, has more integrity, and has better uptime than a MySQL 
implementation, now THAT would carry weight with the decision makers (who know 
nothing about DBs, but want that data secure, intact and available).

Not sure when the decision will be made.  But I expect to get physically ill if 
I have to dismantle the PG implementation and replace it with MySQL.  

Dear Santa, All I want for Christmas is to be able to keep my DB.




-----Original Message-----
From: Lincoln Yeoh [mailto:ly...@pop.jaring.my] 
Sent: Sunday, December 20, 2009 10:05 AM
To: Greg Smith; Gauthier, Dave
Cc: pgsql-general@postgresql.org
Subject: Re: [GENERAL] Justifying a PG over MySQL approach to a project

At 05:44 AM 12/17/2009, Greg Smith wrote:
You've probably already found 
http://wiki.postgresql.org/wiki/Why_PostgreSQL_Instead_of_MySQL:_Comparing_Reliability_and_Speed_in_2007
 
which was my long treatment of this topic (and overdue for an update).

The main thing I intended to put into such an update when I get to 
it is talking about the really deplorable bug handling situation for 
MySQL, which is part of how all the data corruption issues show 
up.  There's a good overview of its general weirdness at 
http://www.xaprb.com/blog/2007/08/12/what-would-make-me-buy-mysql-enterprise/ 
and the following series of pages lead you through my favorite set of bugs:

More so when Monty himself grumbles about the bug handling situation:

http://monty-says.blogspot.com/2008/11/oops-we-did-it-again-mysql-51-released.html

If people still insist on MySQL, you might want to get it in writing 
that it's someone else's decision to use MySQL and not yours ;).

Ten or so years ago MySQL was better than Postgres95, and it would 
have been easy to justify using MySQL over Postgres95 (which was 
really slow and had a fair number of bugs). But Postgresql is much 
better than MySQL now. That's just my opinion of course.

Link



-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] alter table performance

2009-12-20 Thread Antonio Goméz Soto

On 19-12-09 22:20, Jaime Casanova wrote:

are you planning to run this many times? what is wrong with making
this manually?
doesn't seem like something to make automatic...

but if you insist on plpgsql you can execute select version() into
some_text_var and act accordingly

   
No, this is done in an automatic software update procedure across 
hundreds of machines which run different PostgreSQL versions.

Thanks, I'll give this a try.
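
A rough sketch of that check (the two ALTER statements are hypothetical
placeholders for whatever differs between versions):

CREATE OR REPLACE FUNCTION run_schema_update() RETURNS void AS $$
DECLARE
    v text;
BEGIN
    SELECT version() INTO v;        -- e.g. 'PostgreSQL 8.1.23 on ...'
    IF v LIKE 'PostgreSQL 8.1%' THEN
        EXECUTE 'ALTER TABLE mytable ADD COLUMN newcol integer';
    ELSE
        EXECUTE 'ALTER TABLE mytable ADD COLUMN newcol integer DEFAULT 0';
    END IF;
END;
$$ LANGUAGE plpgsql;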

Antonio.



--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Something like Oracle Forms, but Open Source to use with PostgreSQL?

2009-12-20 Thread Andre Lopes
Hi,

I need to know if there is something like Oracle Forms in the Open Source
world that works with PostgreSQL.

If do you know something, please let me know.

Best Regards,
André.


Re: [GENERAL] Justifying a PG over MySQL approach to a project

2009-12-20 Thread Ron Mayer
Gauthier, Dave wrote:
 The arguments against PG are not technical. 

A few more points that I didn't see in this thread yet that might help
answer the non-technical questions:

* There seem to be more commercial vendors providing support
  for Postgres than MySQL - because most mysql support came
  from that one company.
  http://www.postgresql.org/support/professional_support

* There are bigger companies supporting Postgres than mysql.
  And yes, that'll still be true even if Oracle supports MySQL.
  http://postgresql.fastware.com/

* There are a number of extremely scalable commercial solutions
  based on postgres forks (greenplum, enterprisedb, aster, whatever
  yahoo uses, etc).  These run many of the largest databases
  in the world. If you expect your app to grow to that scale; it
  might make your migration easier.

* There are specialty commercial companies that support
  specific postgres features very well - such as Refractions
  specialized http://www.refractions.net/ which provide
  great postgis support.

* There are enough large companies that depend entirely
  on each of the databases that make either one a save
  choice from that point of view (Skype).   And the way
  Apple and Cisco use it for a number of their programs
  (google cisco postgresql or apple final cut postgreesql
  for links) are other nice datapoints of companies most
  managers would have heard of.



 Dear Santa, All I want for Christmas is to be able to keep my DB.




-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Something like Oracle Forms, but Open Source to use with PostgreSQL?

2009-12-20 Thread John R Pierce

Andre Lopes wrote:

Hi,

I need to know if there is something like Oracle Forms in the Open 
Source world that works with PostgreSQL.


If do you know something, please let me know.


perhaps OpenOffice Data could do what you need.  I'm not real familiar 
with Oracle Forms, but I know OOo Data can connect to most any database 
including postgres, and it can be used to build 'forms'-based database 
applications with queries and reports... it's somewhat akin to 
Microsoft Access in basic functionality.






--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Something like Oracle Forms, but Open Source to use with PostgreSQL?

2009-12-20 Thread Guy Rouillier

On 12/20/2009 4:13 PM, Andre Lopes wrote:

Hi,

I need to know if there is something like Oracle Forms in the Open
Source world that works with PostgreSQL.

If do you know something, please let me know.



Some quick Googling found this if you are looking for a desktop solution:

http://groups.fsf.org/wiki/Oracle_Forms

or this if you are looking for a web solution:

http://stackoverflow.com/questions/179849/best-solution-for-migration-from-oracle-forms-6i-to-the-web

--
Guy Rouillier

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] PL/Perl Performance Problems

2009-12-20 Thread Alex -

Tom, Scott, Alvaro, thanks for the hints on this issue. It looks as if one of 
the EOD maintenance jobs, which does a few extensive queries, pushes data out 
of memory, leading to this behavior.
Is there a way to permanently cache some tables in memory?

Thanks,
Alex

 Date: Sat, 19 Dec 2009 00:10:36 -0700
 Subject: Re: [GENERAL] PL/Perl Performance Problems
 From: scott.marl...@gmail.com
 To: ainto...@hotmail.com
 CC: t...@sss.pgh.pa.us; pgsql-general@postgresql.org
 
 According to your original post, you do selects in step 1 and 2...  Or
 is this a different job and I've lost the thread (happens to me plenty
 :) )
 
 1. Selects about 20 Records from Table A (
- loops though the list and deletes in total about 50k records in Table B
 2. For each record form Table A it then selects Records from Table C
- loops through these records about 50K in total
- for each runs a query 3 Tables, 10-20M records
- inserts a record in Table B .. about 50K
 3. Returns some stats on the whole operation (100 records).
 
 On Sat, Dec 19, 2009 at 12:07 AM, Alex - ainto...@hotmail.com wrote:
  On a 2nd thought... where does the cach come into play when i only do
  inserts and no selects.
  Alex
 
  Date: Fri, 18 Dec 2009 23:45:07 -0700
  Subject: Re: [GENERAL] PL/Perl Performance Problems
  From: scott.marl...@gmail.com
  To: ainto...@hotmail.com
  CC: t...@sss.pgh.pa.us; pgsql-general@postgresql.org
 
  On Fri, Dec 18, 2009 at 11:37 PM, Alex - ainto...@hotmail.com wrote:
   Hmm...
   how can that be. This is happening every day, so its not a one off or
   happens once in the morning then in the afternoon. There is also no
   other
   task running on the system, its dedicated to postgres.
   Could the Autovacuum cause problems? Starting to invoke Analyze at the
   beginning of the day but the keep silent till the day timestamp breaks ?
   The think is that I have 4 servers setup in a similar way and all have
   exactly the same problem.
 
  What cron jobs are on that machine that run at night? Note that on
  many OSes, maintenance crons are scheduled in a dir something like
  /etc/cron.daily etc... On my laptop they all run at midnight. I'm
  wondering if they're blowing out your cache so that you just don't
  have the same performance the first time you hit a particular dataset
  after they've run. Just a guess. You could try disabling them for a
  day and see what happens.
 
  --
  Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
  To make changes to your subscription:
  http://www.postgresql.org/mailpref/pgsql-general
 
  
 
 
 
 -- 
 When fascism comes to America, it will be intolerance sold as diversity.
 
 -- 
 Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
 To make changes to your subscription:
 http://www.postgresql.org/mailpref/pgsql-general
  

Re: [GENERAL] PL/Perl Performance Problems

2009-12-20 Thread Tom Lane
Alex - ainto...@hotmail.com writes:
 Tom, Scott, Alvaro,thanks for the hints on this issue. It looks as if one of 
 the EOD maintenance jobs which does a few extensive queries does push data 
 out of memory leading to this behavior. 
 Is there a way to permanently cash some tables into memory?

Not as such, and if there were it probably wouldn't be an overall
performance win anyway, because you'd hurt your maintenance tasks.
What you might consider doing is, at the end of the EOD sequence, running
some dummy queries that scan the tables you use normally, causing them to
get swapped back in so the cache is already primed when people come to work
in the morning.
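
Something as crude as this, run as the last step of the EOD batch, does the
trick (table names made up); the sequential scans pull the tables' pages back
through the cache:

SELECT count(*) FROM orders;
SELECT count(*) FROM order_lines;
SELECT count(*) FROM customers;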

regards, tom lane

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Extended Query, flush or sync ?

2009-12-20 Thread John DeSoi
Hi Raimon,

On Dec 20, 2009, at 2:11 PM, Raimon Fernandez wrote:

 
 I'm not seeing my e-mails on the PostgreSQL General List ...
 
 ??

Yes, my last message did not make it to the list yesterday (you obviously 
received it). I double-checked, and it was cc'd to the list.


 I can pack all of them and send them at the same time, except the Parse, 
 which will go at the beginning of the connection in my case.

I have two routines, prepare and exec_prepare.

To prepare a named statement for multiple uses, I use prepare (parse, describe, 
sync).

exec_prepare can take a statement from prepare OR you can pass it the unparsed 
SQL instead (along with the parameters). In the second case it performs the 
parse first with the unnamed prepared statement (empty string) and then 
executes it. This is nice because if you don't need multiple executions, you 
can build and execute with a single network write and read. You get the safety 
of parameterized execution and you don't have a prepared statement to dispose 
of in another operation.


John DeSoi, Ph.D.





-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Justifying a PG over MySQL approach to a project

2009-12-20 Thread Greg Smith

Ron Mayer wrote:

* There are enough large companies that depend entirely
  on each of the databases that make either one a save
  choice from that point of view (Skype).   And the way
  Apple and Cisco use it for a number of their programs
  
Yeah, these are all good examples.  Cisco uses PostgreSQL in a number of 
products:


Carrier-Sensitive Routing:  
http://www.cisco.com/en/US/products/sw/voicesw/ps4371/products_user_guide_chapter09186a00800c252c.html


Fabric Manager:  
http://www.cisco.com/en/.../product_data_sheet09186a00800c4656.pdf


Those have non-trivial uptime requirements.  Call routing is not a 
field particularly tolerant of the "my database got corrupted and went 
down" kind of errors.


You'll similarly find PostgreSQL used inside Japan's Nippon Telegraph 
and Telephone Corporation (NTT) too, enough so that they're doing major 
development to improve it (they're sponsoring the Streaming 
Replication feature targeted for 8.5).  When the telcos and providers 
of telco equipment like Skype, Cisco, and NTT are all using PostgreSQL, 
it certainly makes it easy to support the idea that the database is 
reliable in the real world.


--
Greg Smith2ndQuadrant   Baltimore, MD
PostgreSQL Training, Services and Support
g...@2ndquadrant.com  www.2ndQuadrant.com


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Justifying a PG over MySQL approach to a project

2009-12-20 Thread Greg Smith

Merlin Moncure wrote:

It was only with the 8.x versions that postgres
really started pulling away.
  

Today I was re-reading a great reminder of just how true this is:

http://suckit.blog.hu/2009/09/29/postgresql_history

From the baseline provided by 8.0, PostgreSQL increased in speed by 
about 8X on both the read and write sides between 8.1 and 8.3.  Since 8.1 
came out in late 2005, it's no wonder the "PostgreSQL is slow" meme got 
so deep into people's memories--until only four years ago, it was still 
actually true.


--
Greg Smith2ndQuadrant   Baltimore, MD
PostgreSQL Training, Services and Support
g...@2ndquadrant.com  www.2ndQuadrant.com


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general