Re: [GENERAL] losing schema name in pg_dump

2012-05-20 Thread c k
I have two tables software.orders and software.products.
I created two views.

CREATE OR REPLACE VIEW software.o1 AS
 SELECT orders.orderid, orders.productid, orders.amount
   FROM software.orders;

CREATE OR REPLACE VIEW software.o2 AS
 SELECT o1.orderid, o1.productid, o1.amount, products.productname
   FROM software.o1
   JOIN software.products ON o1.productid = products.productid;

Now I take a plain-format backup to generate an SQL file.

In that file, pg_dump shows the view definitions as follows.

CREATE VIEW o1 AS
SELECT orders.orderid, orders.productid, orders.amount FROM orders;


CREATE VIEW o2 AS
SELECT o1.orderid, o1.productid, o1.amount, products.productname FROM
(o1 JOIN products ON ((o1.productid = products.productid)));

If I change view o2 like this:

CREATE OR REPLACE VIEW software.o2 AS
 SELECT o1.orderid, o1.productid, o1.amount, products.productname
   FROM software.o1
   JOIN core.products ON o1.productid = products.productid;

and again generate the SQL script with pg_dump, it still removes from the
view definition only those schema-name occurrences that refer to the schema
in which the object itself resides.

CREATE VIEW o2 AS
SELECT o1.orderid, o1.productid, o1.amount, products.productname FROM
(o1 JOIN core.products ON ((o1.productid = products.productid)));

It keeps the schema name 'core' as it is but removes 'software'.
This makes it difficult to port the database to other RDBMS systems and to
manage updates to customer installations; with a few hundred views and a few
hundred functions it becomes harder still. So it would be better not to
remove any schema name that was explicitly written.
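
For what it's worth, here is roughly how the relevant part of the plain dump
reads (a sketch of the behaviour described above, not verbatim pg_dump
output). The search_path is set once at the top, which is why the stripped
names still resolve to 'software' on restore while 'core' stays qualified:

-- Sketch of the plain-format dump for view o2:
SET search_path = software, pg_catalog;

CREATE VIEW o2 AS
 SELECT o1.orderid, o1.productid, o1.amount, products.productname
   FROM o1
   JOIN core.products ON o1.productid = products.productid;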

Regards,

C P  Kulkarni


Re: [GENERAL] timestamps, formatting, and internals

2012-05-20 Thread Tom Lane
David Salisbury  writes:
> Actually, figured I'd post the whole function, painful as it
> might be for anyone to read.  If anyone sees something that's a bit
> of a risk ( like perhaps the whole thing ;)

Well, I don't know exactly what's causing your issue, but I see a few
things that seem rather dubious:

>  min(
>abs(
>  cast(
>extract(
>  epoch FROM (
>measured_at -
>  calculate_local_solar_noon(measured_at,longitude)
>  )
>) as integer
>  )
>)
>  ) as
>  minimum_time_between_measured_and_solarnoon,

Is there a really good reason to force the interval value to integer
here?  I forget offhand whether you get truncation or rounding when you
do that, but in either case it's entirely likely that the computed min()
will be less than the actual difference for *any* specific real site, if
the smallest such difference has a fractional part.  I'd lose the CAST
step and see what happens.
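
A quick check in psql (an aside, easy to reproduce) shows that the
float8-to-integer cast rounds rather than truncates:

-- Rounding, not truncation:
SELECT 2.6::float8::integer;     -- 3
SELECT (-2.6)::float8::integer;  -- -3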

>(
>  (
>sd.measured_at = (
>  calculate_local_solar_noon(sd.measured_at,sds.longitude) + (
>sds.minimum_time_between_measured_and_solarnoon::text ||
>  ' secs'
>  )::interval
>)
>  )
>  or
>  (
>sd.measured_at = (
>  calculate_local_solar_noon(sd.measured_at,sds.longitude) - (
>sds.minimum_time_between_measured_and_solarnoon::text ||
>  ' secs'
>  )::interval
>)
>  )
>)

Because of the CAST above, these tests are guaranteed to fail if the
measured_at value has a fractional-second part, and I'm not sure why you
are assuming that that should be zero.  Also, the above is an expensive,
grotty, imprecise way to convert a number back to an interval.  Consider
using

sds.minimum_time_between_measured_and_solarnoon * interval '1 second'
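
For example (the value is made up; both yield the same interval, without the
text round-trip):

SELECT 42.5 * interval '1 second';          -- 00:00:42.5
SELECT (42.5::text || ' secs')::interval;   -- 00:00:42.5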

or even better, what about

abs (extract (epoch from (
   sd.measured_at -
   calculate_local_solar_noon(sd.measured_at, sds.longitude)
))) <= sds.minimum_time_between_measured_and_solarnoon

which seems to me to be a lot more naturally related to what you're
doing to compute minimum_time_between_measured_and_solarnoon in the
first place.

regards, tom lane



Re: [GENERAL] Libpq question

2012-05-20 Thread Chris Angelico
On Mon, May 21, 2012 at 6:12 AM, John R Pierce  wrote:
> On 05/20/12 12:52 PM, John Townsend wrote:
>>
>> By by-passing the "dll" (or "so" on Linux) library, I mean you write
>> function or procedure calls to the server that is running as a service on
>> Windows. You don't use the library with its 160 exported functions. You
>> connect directly to the server, thus saving one layer of protocols.  To do
>> this, you have to translate all the C functions you need (not just the
>> headers or ".h" files) into Pascal. Not a trivial task!
>
> the database service is a completely separate collection of processes.   you
> can't just 'call' between processes; you need an RPC mechanism.  sockets are
> as good a mechanism as any.

In that case, yes, there are such implementations around. Martijn
mentioned a few, and I mentioned the Pike one, all of which do indeed
bypass libpq and talk directly to the server. It is, as I understand
it, an open and stable protocol, so it's no different from writing a
program that connects to port 25 and talks SMTP rather than dropping
to sendmail.

Chris Angelico



[GENERAL] Re: Questions of the privileges to use the pg_cancel_backend and pg_terminate_backend function. Thanks.

2012-05-20 Thread josepdba
I also needed to grant non-superusers the privilege to execute
pg_terminate_backend, and I did it in a separate schema, too. But to keep
users from killing other users' connections, I wrote another function that
only allows terminating connections made by the same user that is executing
it.

I wrote a post about it on my blog:
http://dbadailystuff.com/2012/05/12/pg_terminate_backend-for-non-superusers/
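
In outline, the function looks something like this (a sketch, not the exact
code from the post; the schema, function and role names are illustrative,
and pg_stat_activity's pid column is 'procpid' on 9.1-era servers):

-- Must be created by a superuser; SECURITY DEFINER lets ordinary users run
-- pg_terminate_backend indirectly, but only against their own backends.
CREATE FUNCTION utils.terminate_own_backend(target_pid integer)
RETURNS boolean AS $$
BEGIN
    IF NOT EXISTS (SELECT 1 FROM pg_stat_activity
                   WHERE procpid = target_pid
                     AND usename = session_user) THEN
        RAISE EXCEPTION 'pid % is not one of your connections', target_pid;
    END IF;
    RETURN pg_terminate_backend(target_pid);
END;
$$ LANGUAGE plpgsql SECURITY DEFINER;

REVOKE ALL ON FUNCTION utils.terminate_own_backend(integer) FROM PUBLIC;
GRANT EXECUTE ON FUNCTION utils.terminate_own_backend(integer) TO app_users;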





Re: [GENERAL] Libpq question

2012-05-20 Thread John R Pierce

On 05/20/12 12:52 PM, John Townsend wrote:
By by-passing the "dll" (or "so" on Linux) library, I mean you write
function or procedure calls to the server that is running as a service
on Windows. You don't use the library with its 160 exported functions.
You connect directly to the server, thus saving one layer of
protocols.  To do this, you have to translate all the C functions you
need (not just the headers or ".h" files) into Pascal. Not a trivial
task!


the database service is a completely separate collection of processes.   
you can't just 'call' between processes; you need an RPC mechanism.  
sockets are as good a mechanism as any.




--
john r pierce  N 37, W 122
santa cruz ca  mid-left coast




Re: [GENERAL] Libpq question

2012-05-20 Thread John Townsend
By by-passing the "dll" (or "so" on Linux) library, I mean you write 
function or procedure calls to the server that is running as a service 
on Windows. You don't use the library with its 160 exported functions. 
You connect directly to the server, thus saving one layer of protocols.  
To do this, you have to translate all the C functions you need (not just 
the headers or ".h" files) into Pascal. Not a trivial task!


Would it be worth it? It depends, but in most situations it would not give 
you more speed. FPC comes with a nice utility called h2pas.exe that does 
a decent job and can save you a lot of time, though you still have to clean 
up some of the translations.


Look at the *.c and *.h files in the \src\interfaces\libpq directory that 
comes with the Postgres source.


John


On 5/20/2012 7:51 AM, zeljko wrote:

John Townsend wrote:


It appears that some developers (Davart) are by-passing the standard
client library, “libpq.dll”, and directly accessing the server using
Delphi or FPC. I am not sure of the advantage here. All libpq.dll

I'm an FPC user and I use libpq.so (.dll, .dylib) via zeoslib.
Those who bypass libpq probably use ODBC connections or similar.

zeljko







[GENERAL] losing schema name in pg_dump

2012-05-20 Thread c k
Hi all,

Many times I have to dump all objects from a schema (a single schema holding
only functions and views) in plain-text format. I have found that pg_dump
includes a SET search_path statement at the beginning and drops, from
function/view bodies, all occurrences of the schema name to which the dumped
object belongs, even when the names were explicitly written when the
functions/views were created.

Function and view definitions read from the information_schema tables show
the explicitly defined schema names, but pg_dump does not show them. Is there
any switch to change this behaviour, or is this pg_dump's default behaviour
that we cannot change?
This makes it problematic to maintain the software for other RDBMS systems,
since for most of them we have to write schema names explicitly at all times.
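
A workaround that seems to work (it relies on the server's deparser, not on
any pg_dump switch): names that are not visible via the current search_path
come back schema-qualified, so you can get fully qualified definitions out of
the backend yourself:

SET search_path = pg_catalog;
SELECT pg_get_viewdef('software.o2'::regclass, true);
-- With 'software' no longer on the search_path, the output qualifies
-- every reference, e.g. FROM software.o1 JOIN core.products ...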

Thanks,

C P Kulkarni


Re: [GENERAL] suggestion for backup (pg_dump)

2012-05-20 Thread c k
Yes, it is useful, but the situation is different. Most of the time, objects
are first created and tested, and only after finalization can they go into
the extension update file. Also, it can be difficult to keep an extension's
object definitions out of the backup taken with pg_dump. A customer would
have to execute commands to create the extension and then update it through
every version up to the current one, which can be very difficult.
My suggestion is to provide an extra option for pg_dump to include only
objects of specific types, along with a switch for a schema.
Thanks for the help. I am looking into whether it is really possible to use
extensions for our software.

Regards,

C P Kulkarni
On Sun, May 20, 2012 at 12:43 PM, Gabriele Bartolini <
gabriele.bartol...@2ndquadrant.it> wrote:

> Hi,
>
> On 19/05/12 19:09, c k wrote:
>
>  similar tasks and am already using it. This problem arises when I have to
>> issue an update script to the client containing mostly functions. And as
>> most of the functions depend on others, and there are more than 1100 of
>> them, it becomes hard to write a list of objects for pg_dump.
>>
> It may sound unrelated at first, but if you have PostgreSQL 9.1 this use
> case matches very well with the new EXTENSION framework. Please look at the
> CREATE EXTENSION command and evaluate it.
>
> Cheers,
> Gabriele
>
> --
>  Gabriele Bartolini - 2ndQuadrant Italia
>  PostgreSQL Training, Services and Support
>  gabriele.bartolini@2ndQuadrant.it | www.2ndQuadrant.it
>
>


Re: [GENERAL] Locking or Something Else?

2012-05-20 Thread Ian Harding
On Sunday, May 20, 2012, Martijn van Oosterhout wrote:

> On Sun, May 20, 2012 at 12:26:26AM -0700, Ian Harding wrote:
> > I have a situation where an increase in volume of inserts into the
> > main transaction table causes a huge slowdown.  The table has lots of
> > indexes and foreign keys and a trigger.
> >
> > Clearly, something is causing a resource contention issue, but here's
> > my main question:
> >
> > I have log_lock_waits = on  and deadlock_timeout = 250ms.  Is there
> > any way I could have delays of tens of seconds caused by a write
> > conflict in a table accessed by a trigger that would not result in the
> > wait being logged?
>
> The most common cause for slowdowns during inserts is if you're not
> wrapping them into large transactions.  The deadlock timeout only
> tracks deadlocks, it won't trigger on normal lock waits.  There can be
> issues with respect to foreign keys, but I didn't think they are
> triggered on inserts.


I must have misunderstood log_lock_waits.  I thought it logged all lock
waits longer than deadlock_timeout.


>
> If you are wrapping into transactions, then it may be that your disk
> subsystem has slow fsyncs.
>
> Hope this helps,
> --
> Martijn van Oosterhout  http://svana.org/kleptog/
> > He who writes carelessly confesses thereby at the very outset that he does
> > not attach much importance to his own thoughts.
>   -- Arthur Schopenhauer
>


Re: [GENERAL] Why are pg_restore taking that long ?

2012-05-20 Thread Josh Kupershmidt
On Sun, May 20, 2012 at 8:39 AM, Poul Møller Hansen  wrote:

>> Anyway, if you are seeing no activity at the end of the restore for quite
>> a while you may want to see if large objects are the reason.
>>
>
> The dump is from version 9.0.7 and it's being restored into version
> 9.1.3. There are no large objects.
>
> I'm now doing another restore with the -v switch, and hopefully it will show
> what it is doing at the end.
> There are Bucardo triggers in the database. Perhaps they are the reason,
> and it should be restored with the "--disable-triggers" switch?

Possibly. You should be able to tell what's taking so much time by
keeping an eye on your server during the restore. You could set
log_statement = 'all' (or just 'ddl' if there is significant other
activity in other databases) during the restore, and keep an eye on
pg_stat_activity if you need to.
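
Something along these lines, for instance (a sketch; 'yourdb' is a
placeholder, and the column names are the 9.1-era ones):

-- New sessions pick this up, so set it before starting pg_restore:
ALTER DATABASE yourdb SET log_statement = 'ddl';

-- Meanwhile, watch what the restore backend is doing:
SELECT procpid, waiting, current_query
  FROM pg_stat_activity
 WHERE datname = 'yourdb';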


Josh



Re: [GENERAL] Why are pg_restore taking that long ?

2012-05-20 Thread Tom Lane
Josh Kupershmidt  writes:
> ... Notice that pg_backup_archiver.c and
> pg_backup_tar.c use an inconsistent log level for this same message,
> which might explain where you saw the message previously.

Seems like that ought to be fixed.

> ... Or change this bit in pg_restore.c to assign a 2
> instead of a 1:
> 
> case 'v':   /* verbose */
> opts->verbose = 1;
>   break;

"verbose" is used as a boolean, so that wouldn't actually help.

> It'd sure be nice to have these programs allow "-v -v" style switches
> to signify cranking up the debugLevel, unless I'm missing some other
> way to do so.

Yeah.  ahlog() also pays attention to a "debugLevel" value, but AFAICS
that's totally vestigial with no way to set it.  It would likely be a
worthwhile activity to rationalize these things into a single
maximum-message-level value that could be ratcheted up with multiple -v
switches.

regards, tom lane



Re: [GENERAL] Why are pg_restore taking that long ?

2012-05-20 Thread Poul Møller Hansen


One thing you may want to look at (if this is PG 8.4.x) is the number of
large objects in pg_largeobject.  If your apps don't use large objects
this is not relevant.  If they do, then it may be.  I've noticed that
pg_restore no longer reports the restoration of individual LO items.  It
used to but no longer.  That means that even with multiple '-v' options
specified it appears that the restore has hung even though it is in fact
adding large objects all the while.  I've also noticed that I can no longer
see the objects as they are added to the db.  I wonder if their restoration
got moved to a single transaction?

Anyway, if you are seeing no activity at the end of the restore for quite
a while you may want to see if large objects are the reason.



The dump is from version 9.0.7 and it's being restored into version 
9.1.3. There are no large objects.


I'm now doing another restore with the -v switch, and hopefully it will 
show what it is doing at the end.
There are Bucardo triggers in the database. Perhaps they are the 
reason, and it should be restored with the "--disable-triggers" switch?


It would be neat if there was a timestamp on the pg_restore steps in 
verbose mode.



Poul



Re: [GENERAL] IN vs equality (was Re: odd intermittent query hanging issue)

2012-05-20 Thread Tom Lane
Chris Angelico  writes:
> On Sat, May 19, 2012 at 3:09 AM, Steve Crawford
>  wrote:
>> I also don't understand the "xcrm.channel_id in (1)" instead of
>> "xcrm.channel_id = 1" unless this is a generated query and there could be
>> multiple ids in that condition.

> Side point from this thread. Is there ever any difference between
> these two constructs?

According to transformAExprIn() in parse_expr.c, no.
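
It's also easy to check empirically (a sketch; the table name is made up).
The one-element IN is collapsed at parse time, so both statements produce
the same plan:

EXPLAIN SELECT * FROM t WHERE channel_id IN (1);
EXPLAIN SELECT * FROM t WHERE channel_id = 1;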

regards, tom lane



Re: [GENERAL] Libpq question

2012-05-20 Thread Martijn van Oosterhout
On Sun, May 20, 2012 at 02:51:42PM +0200, zeljko wrote:
> John Townsend wrote:
> 
> > It appears that some developers (Davart) are by-passing the standard
> > client library, “libpq.dll”, and directly accessing the server using
> > Delphi or FPC. I am not sure of the advantage here. All libpq.dll
> 
> I'm an FPC user and I use libpq.so (.dll, .dylib) via zeoslib.
> Those who bypass libpq probably use ODBC connections or similar.

The PostgreSQL-ODBC drivers that I know of use libpq as well.

There are other implementations of the libpq protocol, the Java lib
being the major one.  There are pure Perl/Python implementations, but
AFAIK they are not widely used.

It's not common to not use libpq.

Have a nice day,
-- 
Martijn van Oosterhout  http://svana.org/kleptog/
> He who writes carelessly confesses thereby at the very outset that he does
> not attach much importance to his own thoughts.
   -- Arthur Schopenhauer




Re: [GENERAL] Locking or Something Else?

2012-05-20 Thread Martijn van Oosterhout
On Sun, May 20, 2012 at 12:26:26AM -0700, Ian Harding wrote:
> I have a situation where an increase in volume of inserts into the
> main transaction table causes a huge slowdown.  The table has lots of
> indexes and foreign keys and a trigger.
> 
> Clearly, something is causing a resource contention issue, but here's
> my main question:
> 
> I have log_lock_waits = on  and deadlock_timeout = 250ms.  Is there
> any way I could have delays of tens of seconds caused by a write
> conflict in a table accessed by a trigger that would not result in the
> wait being logged?

The most common cause for slowdowns during inserts is if you're not
wrapping them into large transactions.  The deadlock timeout only
tracks deadlocks, it won't trigger on normal lock waits.  There can be
issues with respect to foreign keys, but I didn't think they are
triggered on inserts.

If you are wrapping into transactions, then it may be that your disk
subsystem has slow fsyncs.
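
To make the batching point concrete (a sketch with a made-up table): each
COMMIT costs an fsync, so grouping many rows per transaction amortizes that
cost:

BEGIN;
INSERT INTO txn (id, amount) VALUES (1, 10.00);
INSERT INTO txn (id, amount) VALUES (2, 12.50);
-- ... many more rows per transaction ...
COMMIT;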

Hope this helps,
-- 
Martijn van Oosterhout  http://svana.org/kleptog/
> He who writes carelessly confesses thereby at the very outset that he does
> not attach much importance to his own thoughts.
   -- Arthur Schopenhauer




Re: [GENERAL] Libpq question

2012-05-20 Thread zeljko
John Townsend wrote:

> It appears that some developers (Davart) are by-passing the standard
> client library, “libpq.dll”, and directly accessing the server using
> Delphi or FPC. I am not sure of the advantage here. All libpq.dll

I'm an FPC user and I use libpq.so (.dll, .dylib) via zeoslib.
Those who bypass libpq probably use ODBC connections or similar.

zeljko




Re: [GENERAL] Why are pg_restore taking that long ?

2012-05-20 Thread Josh Kupershmidt
On Sat, May 19, 2012 at 10:57 PM, Bosco Rama  wrote:
> Hey Josh,
>
> I found the message I was seeing.  It was/is(?) in StartRestoreBlob() and it
> looks like this:
>
>    ahlog(AH, 2, "restoring large object with OID %u\n", oid);
>
> But I don't know how to find it in the current git tree or how to activate
> it from the command-line (assuming it is still part of the release).

Yup, that's still there, see pg_backup_archiver.c in git head:

http://git.postgresql.org/gitweb/?p=postgresql.git;a=blob;f=src/bin/pg_dump/pg_backup_archiver.c;h=e29265953d46aa9730eccca41062dfd9b2c45ba3;hb=f1f6737e154f9d00f1565fc08fd7ac677b380822

The reason that message doesn't show up is it's being logged with
level=2, and with the "-v" or "--verbose" switch, only messages at
level 1 or less get logged. Notice that pg_backup_archiver.c and
pg_backup_tar.c use an inconsistent log level for this same message,
which might explain where you saw the message previously. If you're
using a tar-format backup, you will see the message:

  pg_restore: restoring large object OID 16388

for each LO in verbose mode.

Now, if you're keen on seeing that message for custom-format archives,
you could change that "2" to a "1" in the line you posted from
pg_backup_archiver.c and recompile, and that should give you the
message you're after. Or change this bit in pg_restore.c to assign a 2
instead of a 1:

case 'v':   /* verbose */
opts->verbose = 1;
break;

It'd sure be nice to have these programs allow "-v -v" style switches
to signify cranking up the debugLevel, unless I'm missing some other
way to do so.

Josh



[GENERAL] Locking or Something Else?

2012-05-20 Thread Ian Harding
I have a situation where an increase in volume of inserts into the
main transaction table causes a huge slowdown.  The table has lots of
indexes and foreign keys and a trigger.

Clearly, something is causing a resource contention issue, but here's
my main question:

I have log_lock_waits = on  and deadlock_timeout = 250ms.  Is there
any way I could have delays of tens of seconds caused by a write
conflict in a table accessed by a trigger that would not result in the
wait being logged?

Thanks!

Ian
 PostgreSQL 9.0.6 on x86_64-unknown-linux-gnu, compiled by GCC gcc
(GCC) 4.1.2 20080704 (Red Hat 4.1.2-46), 64-bit



Re: [GENERAL] suggestion for backup

2012-05-20 Thread Gabriele Bartolini

Hi,

On 19/05/12 19:09, c k wrote:
similar tasks and am already using it. This problem arises when I have to 
issue an update script to the client containing mostly functions. And as 
most of the functions depend on others, and there are more than 1100 of 
them, it becomes hard to write a list of objects for pg_dump.
It may sound unrelated at first, but if you have PostgreSQL 9.1 this use 
case matches very well with the new EXTENSION framework. Please look at 
the CREATE EXTENSION command and evaluate it.
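
In outline (the names are illustrative; the extension would ship a
myapp_funcs.control file plus versioned SQL scripts), the customer side
reduces to two commands:

-- Installs the objects from myapp_funcs--1.0.sql:
CREATE EXTENSION myapp_funcs;

-- Later, applies the shipped upgrade script myapp_funcs--1.0--1.1.sql:
ALTER EXTENSION myapp_funcs UPDATE TO '1.1';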


Cheers,
Gabriele

--
 Gabriele Bartolini - 2ndQuadrant Italia
 PostgreSQL Training, Services and Support
 gabriele.bartol...@2ndquadrant.it | www.2ndQuadrant.it

