Re: [HACKERS] look up tables while parsing queries

2006-02-06 Thread andrew
On 2/5/06, Neil Conway [EMAIL PROTECTED] wrote:
 If you're referring to the raw parser (parser/gram.y), you should not
 attempt to access any tables. For one thing, the raw parser might be
 invoked outside a transaction. The statement might also refer to a table
 created earlier in the same query string, which would mean the
 referenced table would not exist when the latter part of the query
 string is parsed.

 Instead, database access should be done in the analysis phase -- see
 transformStmt() in parser/analyze.c and friends. There are plenty of
 examples in the code of how to access tables, which should be a helpful
 guide.

 -Neil

It is not in the raw parser. I meant inside transformStmt(). What is
"friends"? Could you possibly point out at least one place that
illustrates how to access tables?

Thanks.

--
andrew

---(end of broadcast)---
TIP 6: explain analyze is your friend


[HACKERS] slow information schema with thousands of users, seq.scan pg_authid

2006-02-06 Thread Pavel Stehule

Hello,

I know a DB with 500 000 users isn't a normal situation, but I need it.
After generating the users, all selects on the system catalogs are slow.
For example, listing the sequences:


SELECT n.nspname as "Schema",
  c.relname as "Name",
  CASE c.relkind WHEN 'r' THEN 'table' WHEN 'v' THEN 'view'
       WHEN 'i' THEN 'index' WHEN 'S' THEN 'sequence'
       WHEN 's' THEN 'special' END as "Type",
  r.rolname as "Owner"
FROM pg_catalog.pg_class c
LEFT JOIN pg_catalog.pg_roles r ON r.oid = c.relowner
LEFT JOIN pg_catalog.pg_namespace n ON n.oid = c.relnamespace
WHERE c.relkind IN ('S','')
  AND n.nspname NOT IN ('pg_catalog', 'pg_toast')
  AND pg_catalog.pg_table_is_visible(c.oid)
ORDER BY 1,2;

  QUERY PLAN
------------------------------------------------------------------------------
Sort  (cost=47532.09..47544.59 rows=5000 width=193) (actual time=30333.490..30333.504 rows=5 loops=1)
  Sort Key: n.nspname, c.relname
  ->  Hash Left Join  (cost=1.06..46947.04 rows=5000 width=193) (actual time=45.918..30333.390 rows=5 loops=1)
        Hash Cond: ("outer".relnamespace = "inner".oid)
        Filter: ("inner".nspname <> ALL ('{pg_catalog,pg_toast}'::name[]))
        ->  Nested Loop Left Join  (cost=0.00..46795.97 rows=5000 width=133) (actual time=28.648..30316.020 rows=5 loops=1)
              Join Filter: ("inner".oid = "outer".relowner)
              ->  Seq Scan on pg_class c  (cost=0.00..9.59 rows=2 width=73) (actual time=16.212..165.521 rows=5 loops=1)
                    Filter: ((relkind = ANY ('{S,""}'::"char"[])) AND pg_table_is_visible(oid))
              ->  Seq Scan on pg_authid  (cost=0.00..12143.06 rows=56 width=118) (actual time=12.702..4306.537 rows=56 loops=5)
        ->  Hash  (cost=1.05..1.05 rows=5 width=68) (actual time=0.070..0.070 rows=5 loops=1)
              ->  Seq Scan on pg_namespace n  (cost=0.00..1.05 rows=5 width=68) (actual time=0.013..0.035 rows=5 loops=1)
Total runtime: 30376.547 ms

Is there any possibility of creating an index for pg_authid?

best regards
Pavel Stehule



---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [HACKERS] [GENERAL] Logging statements and parameter values

2006-02-06 Thread Simon Riggs
On Mon, 2006-01-30 at 17:19 -0500, Bruce Momjian wrote:
 Ted Powell wrote:
  On Mon, Jan 30, 2006 at 04:31:29PM -0500, Bruce Momjian wrote:
   
   I assume it is this TODO:
   
 * Allow protocol-level BIND parameter values to be logged
   
   
   ---
   
   Ted Powell wrote:
Our development group needs to have the option of logging all SQL
statements including substituted parameter values. [...]
  
  That's it! (I should have thought to look in the TODO.)
  
  Has any design work been done on this?
 
 No.  I am with Simon Riggs today at my house and I asked him, hoping he
 can get it done for 8.2.  I don't think it is very hard.

Some more detailed thoughts:

1. Do we want to log parameters at Bind time or at Execution time? Bind
is easier and more correct, but might look a little strange in the log
since the parameters would be logged before the execution appears. IMHO
Bind time is more correct. That would mean we have a separate line for
logged parameters, e.g.
parameters: p1=111 p2=hshssh p3=47000.5

2. Should we save them until end of execution, so we can output them on
the same line as log_min_duration_statement queries? Sounds easier but
the meaning might be more confused.

3. Do we want to log parameters that are used for planning, but not
others? Sometimes yes, sometimes no, I think.

Sounds like we need:
- a log_parameters GUC with settings of: none, plan and all. 
- output log messages at Bind time on a separate log line, which would
replace the existing statement: [protocol] BIND message with
(portalname) parameters: p1=111 p2=hshssh p3=47000.5
- portalname would be blank if we aren't using named portals
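
For illustration only: with a hypothetical log_parameters = all, the
Bind-time log line proposed above might come out roughly as

  LOG:  bind portal1: parameters: p1=111 p2=hshssh p3=47000.5

where "portal1" is a made-up portal name; this is a sketch of the
proposal, not an existing feature.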

While we're discussing logging, I also want to be able to set
log_min_duration_statement on a user-by-user basis (i.e., for individual
applications). We set this to superuser-only for valid security reasons,
but I'd like to have the ability for the superuser to relax that
restriction for short periods, or even permanently on development
servers. That sounds like another GUC: log_security = on
which would enforce SUSET/USERSET control (and would need to be a SIGHUP
parameter).

Best Regards, Simon Riggs


---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [HACKERS] [GENERAL] Logging statements and parameter values

2006-02-06 Thread Csaba Nagy
Simon,

For me the usage pattern would be: log all params, bind-time values, on
the same log line as log_min_duration entries. That's what I need in
order to know which queries are non-performant, and it also helps on
occasion to identify application problems.

In any case all your plans sound very good, can't wait to have it
working :-)

Thanks,
Csaba.

On Mon, 2006-02-06 at 12:43, Simon Riggs wrote:
 On Mon, 2006-01-30 at 17:19 -0500, Bruce Momjian wrote:
  Ted Powell wrote:
   On Mon, Jan 30, 2006 at 04:31:29PM -0500, Bruce Momjian wrote:

I assume it is this TODO:

* Allow protocol-level BIND parameter values to be logged


---

Ted Powell wrote:
 Our development group needs to have the option of logging all SQL
 statements including substituted parameter values. [...]
   
   That's it! (I should have thought to look in the TODO.)
   
   Has any design work been done on this?
  
  No.  I am with Simon Riggs today at my house and I asked him, hoping he
  can get it done for 8.2.  I don't think it is very hard.
 
 Some more detailed thoughts:
 
 1. Do we want to log parameters at Bind time or at Execution time? Bind
 is easier and more correct, but might look a little strange in the log
 since the parameters would be logged before the execution appears. IMHO
 Bind time is more correct. That would mean we have a separate line for
 logged parameters, e.g.
 parameters: p1=111 p2=hshssh p3=47000.5
 
 2. Should we save them until end of execution, so we can output them on
 the same line as log_min_duration_statement queries? Sounds easier but
 the meaning might be more confused.
 
 3. Do we want to log parameters that are used for planning, but not
 others? Sometimes yes, sometimes no, I think.
 
 Sounds like we need:
 - a log_parameters GUC with settings of: none, plan and all. 
 - output log messages at Bind time on a separate log line, which would
 replace the existing statement: [protocol] BIND message with
 (portalname) parameters: p1=111 p2=hshssh p3=47000.5
 - portalname would be blank if we aren't using named portals
 
 While we're discussing logging, I also want to be able to set
 log_min_duration_statement on a user-by-user basis (i.e., for individual
 applications). We set this to superuser-only for valid security reasons,
 but I'd like to have the ability for the superuser to relax that
 restriction for short periods, or even permanently on development
 servers. That sounds like another GUC: log_security = on
 which would enforce SUSET/USERSET control (and would need to be a SIGHUP
 parameter).
 
 Best Regards, Simon Riggs
 
 
 ---(end of broadcast)---
 TIP 9: In versions below 8.0, the planner will ignore your desire to
choose an index scan if your joining column's datatypes do not
match


---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [HACKERS] [ADMIN] How to determine date / time of last postmaster restart

2006-02-06 Thread Tom Lane
adey [EMAIL PROTECTED] writes:
 Please let me know if there is a way to determine when the Postmaster was
 last restarted?

The last postmaster start time, or the last database reset?  These are
not the same if any backends have crashed since the postmaster started.
For determining stats lifespan I think you need the latter.

Offhand I think the file timestamp of $PGDATA/postmaster.opts would do
for the postmaster start time, and postmaster.pid for the other (I think
postmaster.pid is updated during a reset).

PG 8.1 will have a function to return postmaster start time, but not
database reset time.  I wonder if this is misdefined --- if you are
trying to measure database uptime, the last reset would be more
appropriate to track.
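
(For reference, the 8.1 function mentioned above is
pg_postmaster_start_time(), callable as

SELECT pg_postmaster_start_time();

there is no corresponding function for the last database reset time.)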

regards, tom lane

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq




Re: [HACKERS] Shared memory and memory context question

2006-02-06 Thread Richard Hills
On Mon February 6 2006 05:17, Mark Woodward wrote:
 I posted some source to a shared memory sort of thing to the group, as
 well as to you, I believe.

Indeed, and it looks rather interesting. I'll have a look through it 
when I 
have a chance...

So, after more discussion and experimentation, the possible methods in 
order 
of +elegance/-difficulty/-complexity are:

=1. OSSP supported shared mem, possibly with a pg memory context or Mark's 
shared memory manager.
=1. Separate application which the postgres backends talk to over tcp (which 
actually turns out to be quite a clean way of doing it).
3. Storing rules in db and reloading them each time (which turns out to be an 
utter bastard to do).
4. Shared memory with my own memory manager.

I am *probably* going to go for the separate network application, as I 
believe this is easy and relatively clean, as the required messages should be 
fairly straightforward. Each postgres backend opens a connection to the 
single separate rules-server which sends back a series of commands 
(probably SQL), before the connection is closed again.

If this is Clearly Insane - please let me know!

Regards,

Richard

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [HACKERS] slow information schema with thousands of users, seq.scan pg_authid

2006-02-06 Thread Peter Eisentraut
Pavel Stehule wrote:
 I know a DB with 500 000 users isn't a normal situation, but I need it.
 After generating the users, all selects on the system catalogs are slow.
 For example: listing sequences

 SELECT n.nspname as Schema,
   c.relname as Name,
   CASE c.relkind WHEN 'r' THEN 'table' WHEN 'v' THEN 'view' WHEN 'i'
 THEN 'index' WHEN 'S' THEN 'sequence' WHEN 's' THEN 'special' END as
 Type, r.rolname as Owner
 FROM pg_catalog.pg_class c
  LEFT JOIN pg_catalog.pg_roles r ON r.oid = c.relowner
  LEFT JOIN pg_catalog.pg_namespace n ON n.oid = c.relnamespace
 WHERE c.relkind IN ('S','')
   AND n.nspname NOT IN ('pg_catalog', 'pg_toast')
   AND pg_catalog.pg_table_is_visible(c.oid)
 ORDER BY 1,2;

I suggest that your problem is the join order (unless you have 50 
tables as well).  Moreover, using left joins instead of inner joins 
seems to be quite useless unless you plan to have tables that are not 
owned by anyone and are not in a schema.

 Is there any possibility of creating an index for pg_authid?

It already has indexes.
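
For reference, one way to list them, using regclass casts:

SELECT indexrelid::regclass FROM pg_index
WHERE indrelid = 'pg_authid'::regclass;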

-- 
Peter Eisentraut
http://developer.postgresql.org/~petere/

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [HACKERS] slow information schema with thousands of users, seq.scan pg_authid

2006-02-06 Thread Andrew - Supernews
On 2006-02-06, Peter Eisentraut [EMAIL PROTECTED] wrote:
 I suggest that your problem is the join order (unless you have 50 
 tables as well).  Moreover, using left joins instead of inner joins 
 seems to be quite useless unless you plan to have tables that are not 
 owned by anyone and are not in a schema.

Perhaps you missed the fact that the query was not one that he wrote,
but is the query that psql uses for \ds?

 there is any possibility creating index for pg_authid?

 It already has indexes.

True, but they're not being used where you'd expect. This seems to be
something to do with the fact that it's not pg_authid which is being
accessed, but rather the view pg_roles.

-- 
Andrew, Supernews
http://www.supernews.com - individual and corporate NNTP services

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [HACKERS] Shared memory and memory context question

2006-02-06 Thread Mark Woodward
 On Mon February 6 2006 05:17, Mark Woodward wrote:
 I posted some source to a shared memory sort of thing to the group, as
 well as to you, I believe.

   Indeed, and it looks rather interesting. I'll have a look through it 
 when
 I
 have a chance...

   So, after more discussion and experimentation, the possible methods in
 order
 of +elegance/-difficulty/-complexity are:

 =1. OSSP supported shared mem, possibly with a pg memory context or Mark's
 shared memory manager.
 =1. Separate application which the postgres backends talk to over tcp
 (which
 actually turns out to be quite a clean way of doing it).

If you hop on over to http://www.mohawksoft.org, you'll see a server
application called MCache. MCache is written to handle *exactly* the
sort of information you are looking to manage. Its primary duty is to
manage highly concurrent/active sessions for a large web cluster. I have
also been working on a PostgreSQL extension for it. It needs to be fleshed
out and, again, needs some heavy-duty QA, but it works on my machine.

I alluded to releasing an extension module for PostgreSQL; I'm actually
working on a much larger set of projects intended to tightly integrate
PostgreSQL, web servers (PHP right now), and a set of service applications
including search and recommendations. In another thread I wanted to add an
extension, xmldbx, to postgresql's contrib dir. Anyway, I digress.

If anyone is interested in lending a hand in QA, examples, and so on, I'd
be glad to take this off line.


 3. Storing rules in db and reloading them each time (which turns out to be
 a
 utter bastard to do).
 4. Shared memory with my own memory manager.

If you have the time and the inclination to do so, it is a fun sort of thing
to write.


   I am *probably* going to go for the separate network application, as I
 believe this is easy and relatively clean, as the required messages should
 be
 fairly straightforward. Each postgres backend opens a connection to the
 single separate rules-server which sends back a series of commands
 (probably SQL), before the connection is closed again.

   If this is Clearly Insane - please let me know!

It isn't a bad idea at all. For MCache, I leave the socket connection open
for the next use of the PostgreSQL session. Web environments usually keep
a cache of active database connections to save the overhead of connecting
each time. You just need to be careful when you clean up.

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [HACKERS] [ADMIN] Postgres 8.1.x and MIT Kerberos 5

2006-02-06 Thread Mohan K
Hello Magnus,

Regarding the configure issue: The platform is Tru64 Unix 5.1b. The
problem I had was that we have compiled our Kerberos build statically and
it is installed in a directory other than the standard location. The
trick of adding to LIBS did not work, as the (krb5support) library needs
to come after the other libs (is there a way to control that?).

As far as the security issue with Kerberos, here is the relevant thread:
http://mailman.mit.edu/pipermail/kerberos/2002-October/002043.html
I am sorry, it was in the Kerberos mailing list, not Postgres.

On 2/5/06, Magnus Hagander [EMAIL PROTECTED] wrote:
  Greetings, I was trying to build postgres 8.1.x from source with the
  MIT Kerberos 5 1.4.x implementation. The whole thing bombs out. After
  some digging, I had to hack the autoconf script (configure.in) to
  properly account for the way the libraries are built for 1.4.x. I
  don't know whether an earlier post had the same issue. I think it
  boils down to adding 'libkrb5support' when all the krb5 libs are
  checked in the configure script.

 (This is better asked in -hackers, I think, copying there)

 What platform is this? I use it with krb5 1.4.3 on Linux (slackware)
 without any modifications at all. Perhaps platform specific behaviour?

 The postmaster is linked to libkrb5support, but I only have -lkrb5 in
 my LIBS as generated by configure. However, if I do ldd on libkrb5.so I
 see that one pulls in libkrb5support.

  On another note, is the kerberos authentication secure? I had
  searched some old threads, where it was indicated the principal is
  not checked by the db as a valid user. Is this still the case?

 The principal name is definitely checked by the db as a valid user, and
 AFAIK it always has been (do you have a reference to where it says it
 doesn't?)

 The *REALM* is not checked, however. This can cause problems if you
 have a multi-realm system (where the realms already trust each other,
 because the KDC has to give out the service ticket) where you have the
 same username existing in multiple realms representing different users.

 If you're in a single realm, it's definitely secure.

 //Magnus


Re: [HACKERS] [ADMIN] Postgres 8.1.x and MIT Kerberos 5

2006-02-06 Thread Magnus Hagander
 Hello Magnus,
 
 Regarding the configure issue:
 The platform is Tru64 Unix 5.1b. The problem I had was that we
 have compiled our Kerberos build statically and it is installed
 in a directory other than the standard location. The trick of
 adding to LIBS did not work, as the (krb5support) library needs
 to come after the other libs (is there a way to control that?).

Ok. Someone more autoconfy than me will have to say something about that
:-)


 As far as the security issue with Kerberos, here is the 
 relevant thread
 
 http://mailman.mit.edu/pipermail/kerberos/2002-October/002043.html
 
 I am sorry it was in Kerberos mailing list not Postgres.

Ah, that explains me not seeing it.

If you need protection against mitm and connection security, you should
use TLS.  We don't use Kerberos for encryption.

//Magnus

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [HACKERS] [ADMIN] Postgres 8.1.x and MIT Kerberos 5

2006-02-06 Thread Mohan K
Thanks. As far as using TLS, it is a good approach, although we don't
need complete channel encryption for every transaction or query. I am
looking at a more granular approach where I can decide, depending on the
security of the information exchange, whether to encrypt the channel or
not (like using maybe GSSAPI). Is this something that will be considered
down the line?

Mohan

On 2/6/06, Magnus Hagander [EMAIL PROTECTED] wrote:
  Hello Magnus,

  Regarding the configure issue: The platform is Tru64 Unix 5.1b. The
  problem I had was that we have compiled our Kerberos build statically
  and it is installed in a directory other than the standard location.
  The trick of adding to LIBS did not work, as the (krb5support)
  library needs to come after the other libs (is there a way to control
  that?).

 Ok. Someone more autoconfy than me will have to say something about
 that :-)

  As far as the security issue with Kerberos, here is the relevant
  thread:
  http://mailman.mit.edu/pipermail/kerberos/2002-October/002043.html
  I am sorry, it was in the Kerberos mailing list, not Postgres.

 Ah, that explains me not seeing it.

 If you need protection against mitm and connection security, you
 should use TLS. We don't use Kerberos for encryption.

 //Magnus


Re: [HACKERS] [ADMIN] Postgres 8.1.x and MIT Kerberos 5

2006-02-06 Thread Peter Eisentraut
Mohan K wrote:
 Regarding the configure issue:
  The platform is Tru64 Unix 5.1b. The problem I had was that we have
 compiled our Kerberos build statically and it is installed in a
 directory other than the standard location. The trick of adding to LIBS
 did not work, as the (krb5support) library needs to come after the
 other libs (is there a way to control that?).

I think what this comes down to is that we don't support static builds 
very well at all.  So you will have to resort to editing 
Makefile.global yourself after configuring.

-- 
Peter Eisentraut
http://developer.postgresql.org/~petere/

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [HACKERS] [GENERAL] Logging statements and parameter values

2006-02-06 Thread Simon Riggs
On Mon, 2006-02-06 at 13:28 +0100, Csaba Nagy wrote:

 For me the usage pattern would be: log all params, bind-time values, on
 the same log line as log_min_duration entries. That's what I need in
 order to know which queries are non-performant, and it also helps on
 occasion to identify application problems.

You remind me that two sets of parameters could be logged.

Trouble is, you need to know both
- parameters that were used to plan the query
- parameters used for this execution of the query
since very often it is the combination that is the problem; i.e., it was
a good query at plan time and if re-planned would also be a good query,
but running the earlier plan with the later set of parameters is bad.

Perhaps it would be useful to store the parameters that were used to
plan the query with the portal, so we could have an option to say "and
with what parameters was this query planned". That would then sensibly
appear on the log_min_messages log line, as you suggest. This is
important for diagnosing many run-time issues accurately.

Maybe we should expand the list to
log_parameters = none | bind | plan | exec | all

bind = log parameters directly at bind time, using a separate log line;
do not store them. Useful mainly as an interface debugging aid.

plan = store parameters used for planning with portal, then output them
with the statement at log_min_message time 
e.g. (plan parms: p1= p2= ...) - which is very similar to what we do now
with spitting out the SQL statement since that is not resent for each
execution

exec = store the parameters used at bind time and output them with the
statement e.g. (exec parms: p1= p2=)

all = store the parameters used at bind time and output them with the
statement, as well as the exec parms e.g. (plan parms: p1= p2= ...)(exec
parms: p1= p2=)

none = nada, not even the current log lines for bind

Sounds like too much, but you don't really want all of that all of the time.

Best Regards, Simon Riggs



---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [HACKERS] slow information schema with thousands of users, seq.scan pg_authid

2006-02-06 Thread Peter Eisentraut
Andrew - Supernews wrote:
 Perhaps you missed the fact that the query was not one that he wrote,
 but is the query that psql uses for \ds ?

I did miss that.  Perhaps with dependency tracking and all, we don't 
need the left joins anymore?

-- 
Peter Eisentraut
http://developer.postgresql.org/~petere/

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [HACKERS] [ADMIN] Postgres 8.1.x and MIT Kerberos 5

2006-02-06 Thread Mohan K
Peter,

It is a chicken-and-egg problem: I still need to enable kerberos in the
configure script to make sure the proper pieces are picked up. But of
course the configure script fails :). If I provide the relevant patches
to the configure.in script, is that acceptable?

thanks

On 2/6/06, Peter Eisentraut [EMAIL PROTECTED] wrote:
 Mohan K wrote:
  Regarding the configure issue: The platform is Tru64 Unix 5.1b. The
  problem I had was that we have compiled our Kerberos build statically
  and it is installed in a directory other than the standard location.
  The trick of adding to LIBS did not work, as the (krb5support)
  library needs to come after the other libs (is there a way to control
  that?).

 I think what this comes down to is that we don't support static builds
 very well at all. So you will have to resort to editing
 Makefile.global yourself after configuring.

 --
 Peter Eisentraut
 http://developer.postgresql.org/~petere/


Re: [HACKERS] Copy From Insert UNLESS

2006-02-06 Thread James William Pye
On Sun, Feb 05, 2006 at 07:14:49PM -0800, Stephan Szabo wrote:
 On Sun, 5 Feb 2006, James William Pye wrote:
  However, constraints referenced in an UNLESS clause that are deferred, in 
  any
  fashion, should probably be immediated within the context of the command.
  Perhaps a WARNING or NOTICE would be appropriately informative if UNLESS 
  were
  to actually alter the timing of a given constraint.
 
 The problem is that even immediate constraints are supposed to be checked
 at end of statement, not at row time.

I see. "Immediated" is not the word that I am actually looking for then. :(
Perhaps Postgres should specify our current "immediate" as a new
constraint mode. "instant", maybe? Sadly, I think it will be difficult to
get away from using that or some other synonym if such an idea were to be
implemented.

[Getting the feeling that this has been discussed before. ;]

 Our implementation of UNIQUE is particularly bad for this.

Yes. Changing how UNIQUE constraints are implemented will likely be the first
step in this patch.

  Any facility that can alter the tuple before it is inserted into the heap
  should probably be exercised prior to the application of the tuple against
  UNLESS's behavior.
 
 The problem is that you can un-violate a unique constraint by changing
 some other row that's already in the table. And I think that it might even
 be legal to do so in an after trigger (and in fact, some other row's after
 trigger).
 [join]
 Basically a violation at the time the row is
 created is irrelevant if the violation is gone by the end of statement.

Okay. I can't help but think of such a trigger as questionable at best.
However, per spec, it should be possible. =\

 This isn't necessarily a killer to the idea though, it probably just means
 the semantics are harder to nail down.

Aye. I figured there would be some details that might take a while.


Once the UNIQUE constraint code is relocated, I think implementing more
standards-compliant constraint timing might be substantially easier. However,
I don't think this should affect UNLESS. Rather, I think UNLESS should, more
or less, demand that specified constraints be checked at the same time as
they are currently. This is meant to be an optimization at multiple levels:
reduce code redundancy (rewriting constraint checks for use prior to the
actual insertion), reduce computational redundancy (potentially, running the
rewritten checks more than once), and reduce unnecessary I/O (avoiding
heap_insert()'ing an "evil" tuple into the target table despite the fact that
the statement may later un-violate it). Although, perhaps, it could be
configurable with an option:
INSERT INTO t UNLESS [DEFERRED] CONSTRAINT VIOLATION. =)
-- 
Regards, James William Pye

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [HACKERS] slow information schema with thousands of users, seq.scan pg_authid

2006-02-06 Thread Tom Lane
Peter Eisentraut [EMAIL PROTECTED] writes:
 Andrew - Supernews wrote:
 Perhaps you missed the fact that the query was not one that he wrote,
 but is the query that psql uses for \ds ?

 I did miss that.  Perhaps with dependency tracking and all, we don't 
 need the left joins anymore?

I don't see anything wrong with leaving the left joins as-is, on the
grounds that

1. the planner can simplify the left joins to inner joins, eg the
join to pg_namespace should be simplified on the strength of the
test on nspname.  (This seems to be broken in HEAD, but it does
work in 8.1 --- I think I broke it with the changes to treat IN
as a ScalarArrayOp.  Will fix.)

2. HEAD also knows how to change the order of the left joins at need.

The real question to me is why the planner doesn't want to use the
index on pg_authid.oid.  That's pretty curious ...

regards, tom lane

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [HACKERS] Copy From Insert UNLESS

2006-02-06 Thread Josh Berkus

James,

Are you sure that a new type of constraint is the way to go for this? 
It doesn't solve our issues in the data warehousing space.  The spec we 
started with for Error-tolerant COPY is:


1) It must be able to handle parsing errors (i.e. bad char set);
2) It must be able to handle constraint violations;
3) It must output all row errors to a log or errors table which makes 
it possible to determine which input row failed and why;
4) It must not slow significantly (like, not more than 15%) the speed of 
bulk loading.


On that basis, Alon started working on a low-level error trapper for 
COPY.   It seems like your idea, which would involve a second constraint 
check, would achieve neither #1 nor #4.


--Josh Berkus

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
  choose an index scan if your joining column's datatypes do not
  match


Re: [HACKERS] slow information schema with thousands of users, seq.scan pg_authid

2006-02-06 Thread Tom Lane
Andrew - Supernews [EMAIL PROTECTED] writes:
 On 2006-02-06, Peter Eisentraut [EMAIL PROTECTED] wrote:
 It already has indexes.

 True, but they're not being used where you'd expect. This seems to be
 something to do with the fact that it's not pg_authid which is being
 accessed, but rather the view pg_roles.

I looked into this and it seems the problem is that the view doesn't
get flattened into the main query because of the has_nullable_targetlist
limitation in prepjointree.c.  That's triggered because pg_roles has
''::text AS rolpassword
which isn't nullable, meaning it would produce wrong behavior if
referenced above the outer join.

Ultimately, the reason this is a problem is that the planner deals only
in simple Vars while processing joins; it doesn't want to think about
expressions.  I'm starting to think that it may be time to fix this,
because I've run into several related restrictions lately, but it seems
like a nontrivial project.

In the meantime, reducing the LEFT JOIN to pg_roles to a JOIN as per
Peter's suggestion seems like the best short-term workaround.

regards, tom lane

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org


Re: [HACKERS] Copy From Insert UNLESS

2006-02-06 Thread Stephan Szabo
On Mon, 6 Feb 2006, Josh Berkus wrote:

 Are you sure that a new type of constraint is the way to go for this?
 It doesn't solve our issues in the data warehousing space.  The spec we
 started with for Error-tolerant COPY is:

 1) It must be able to handle parsing errors (i.e. bad char set);
 2) It must be able to handle constraint violations;
 3) It must output all row errors to a log or errors table which makes
 it possible to determine which input row failed and why;
 4) It must not slow significantly (like, not more than 15%) the speed of
 bulk loading.

 On that basis, Alon started working on a low-level error trapper for
 COPY.   It seems like your idea, which would involve a second constraint
 check, would achieve neither #1 nor #4.

I think in his system it wouldn't check the constraints twice, it'd just
potentially check them at a different time than the normal constraint
timing, so I think it'd cover #4. I'd wonder if there'd be any possibility
of having violations go unnoticed in that case, but I'm not coming up
with an obvious way that could happen.


---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [HACKERS] Copy From Insert UNLESS

2006-02-06 Thread Stephan Szabo
On Mon, 6 Feb 2006, James William Pye wrote:

 On Sun, Feb 05, 2006 at 07:14:49PM -0800, Stephan Szabo wrote:
  On Sun, 5 Feb 2006, James William Pye wrote:
   However, constraints referenced in an UNLESS clause that are deferred, in 
   any
   fashion, should probably be immediated within the context of the 
   command.
   Perhaps a WARNING or NOTICE would be appropriately informative if UNLESS 
   were
   to actually alter the timing of a given constraint.
 
  The problem is that even immediate constraints are supposed to be checked
  at end of statement, not at row time.

 I see. "Immediated" is not the word that I am actually looking for then. :(
 Perhaps Postgres should specify our current "immediate" as a new
 constraint mode. "instant", maybe? Sadly, I think it will be difficult to
 get away from using that or some other synonym if such an idea were to be
 implemented.

 [Getting the feeling that this has been discussed before. ;]

Only parts of it. :)

  Our implementation of UNIQUE is particularly bad for this.

 Yes. Changing how UNIQUE constraints are implemented will likely be the first
 step in this patch.

   Any facility that can alter the tuple before it is inserted into the 
   heap
   should probably be exercised prior to the application of the tuple against
   UNLESS's behavior.
 
  The problem is that you can un-violate a unique constraint by changing
  some other row that's already in the table. And I think that it might even
  be legal to do so in an after trigger (and in fact, some other row's after
  trigger).
  [join]
  Basically a violation at the time the row is
  created is irrelevant if the violation is gone by the end of statement.

 Okay. I can't help but think of such a trigger as questionable at best.
 However, per spec, it should be possible. =\

Yeah, it's pretty odd in the insert case.  It's easy in the update case to
make a case where it matters, definitely less so for insert.

 Once the UNIQUE constraint code is relocated, I think implementing more
 standards-compliant constraint timing might be substantially easier.
 However, I don't think this should affect UNLESS. Rather, I think UNLESS
 should, more or less, demand that specified constraints be checked at the
 same time as they are currently. This is meant to be an optimization at
 multiple levels: reduce code redundancy (rewriting constraint checks for
 use prior to the actual insertion), reduce computational redundancy
 (potentially, running the rewritten checks more than once), and reduce
 unnecessary I/O (avoiding heap_insert()'ing an "evil" tuple into the
 target table despite the fact that the statement may later un-violate
 it). Although, perhaps, it could be configurable with an option:
 INSERT INTO t UNLESS [DEFERRED] CONSTRAINT VIOLATION. =)

I'd say that if we were going to check the constraints at a different
time, we'd want a better name/description than UNLESS CONSTRAINT
VIOLATION since the unadorned INSERT or COPY might run with no constraint
violations.

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [HACKERS] [PORTS] Failed install - libgen.so doesn't exist

2006-02-06 Thread Chris Browne
kleptog@svana.org (Martijn van Oosterhout) writes:
 On Sat, Feb 04, 2006 at 01:54:52AM +0100, Peter Eisentraut wrote:
 I took a first swing at this and rearranged some of these calls.

 ld -- On AIX at least this seems to be some magic library but doesn't 
 have an obvious testable symbol.

 Indeed, appears to be AIX only.

Further, it appears to be AIX pre-4.3 only, when using it for dlopen()
replacement...

It would be an attractive idea to have configure detect not whether
it's present, but rather whether it is needed, and leave it out for AIX
4.3 and better...

 Apparently -lbsd is used on AIX but it contains pow() which is the
 wrong one (the comment wasn't clear). -lBSD was for hpux. Linux used to
 use it but not anymore.

Seneca didn't notice it being picked up; it may be that it is only
detected and used on old versions of AIX...

 PW -- might be a compatibility library but none of the functions I found 
 are used by PostgreSQL.

 Listed for old SCO. No idea what for though.

Apparently this is for compatibility with the AT&T Programmer's
Workbench toolkit; probably not too relevant to anyone these days...

On AIX, it gets detected, but functions are never used.  I'll bet the
same is true on some other platforms (Solaris, HP/UX, and such).
-- 
cbbrowne,@,acm.org
http://cbbrowne.com/info/nonrdbms.html
Smith's Test for Artificial Life:
When animal-rights activists and right-to-life protesters are marching
outside your laboratory, then you know you've definitely made progress
in your artificial life research.  -- Donald A. Smith

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [HACKERS] Copy From Insert UNLESS

2006-02-06 Thread Alon Goldshuv
 Alon Goldshuv on Bizgres has been working on this as well. Maybe you
 could collaborate?  Alon?

I would love to collaborate. The proposal is neat; however, I am not too
excited about handling errors at such high granularity, as far as the user
is concerned. I am more on the same line with Tom Lane's statement in
Simon's thread (Practical error logging for very large COPY statements):

The general problem that needs to be solved is trap any error that
occurs during attempted insertion of a COPY row, and instead of aborting
the copy, record the data and the error message someplace else.  Seen
in that light, implementing a special path for uniqueness violations is
pretty pointless.

But, I definitely share your struggle to finding a good way to handle those
unique/FK constraints...

Out of your current possible known solutions list:

. Temporary table that filters out the evil tuples.
. BEFORE TRIGGER handling the tuple if the constraint of interest is
violated.
. INSERT wrapped in a subtransaction.
. (Other variations)
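
(For context, the temporary-table variant is roughly the following, with
all table, column, and file names hypothetical:

CREATE TEMP TABLE load_buffer (LIKE target_table);
COPY load_buffer FROM '/tmp/data.csv' CSV;
INSERT INTO target_table
  SELECT * FROM load_buffer lb
  WHERE NOT EXISTS (SELECT 1 FROM target_table t WHERE t.id = lb.id);

i.e. load everything first, then move over only the rows that would not
violate the unique key on id.)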

I really don't like temporary tables (too much user intervention) or
subtransactions (slow). I also don't like using pg_loader for that
purpose, as although it's a nice tool, isolating errors with it for large
data sets is impractical.

I guess the BEFORE TRIGGER is the closest solution to what I would like to
achieve. I think something can be done even without a trigger. We could trap
any of the following:

- bad data (any error before the tuple can be created).
- domain constraints
- check constraints
- NOT NULL constraints

As far as UNIQUE goes, maybe there is a good way to do a bt scan against the
index table right before the simple_heap_insert call? Hopefully without too
much code duplication. I am not too familiar with that code, so I don't have
a very specific idea yet. I don't know how much slower things will become
with this extra scan (I would think it will still be simpler and faster than
a subtransaction), but I figure that there is a price to pay if you want
single row error isolation. Otherwise, if the user wants to run COPY like it
is currently (all data rows or nothing) they could still do it in the same
speed using the current code path, bypassing the extra scan.

Not sure this was very helpful, but these are my thoughts at the moment.

Regards,
Alon.




---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [HACKERS] Copy From Insert UNLESS

2006-02-06 Thread James William Pye
On Mon, Feb 06, 2006 at 11:03:06AM -0800, Josh Berkus wrote:
 Are you sure that a new type of constraint is the way to go for this?

[Thinking that you are referring to the new constraint mode that I was
confusingly referring to...]

Well, it really wouldn't be new. It's just labeling what we do now as
something other than "immediate", considering that immediate constraints are
meant to be checked at the end of the SQL statement, whereas our
implementation of immediate is truly immediate, as Stephan pointed out to me.
However, I think our current timing method is better for normal cases, at
least for Postgres, than what the spec specifies.
[See pages 63-66: The second paragraph in 4.17.2 Checking of constraints]

Ultimately, I don't care about this very much. However, I think an
implementation of my proposal would aid in implementing spec compliant
immediate timing.

[If I misunderstood what you were getting at, sorry. :]

 It doesn't solve our issues in the data warehousing space.  The spec we 
 started with for Error-tolerant COPY is:

 1) It must be able to handle parsing errors (i.e. bad char set);

My proposal did not handle this, and purposefully so. A constraint violation,
while inhibiting insertion into the target table, would still yield a kosher
tuple--just not okay for that table, which could then be dropped or redirected
using the THEN INSERT INTO into another precisely structured table for later
analysis. Bad data errors would not even have a tuple to work with in the first
place, which is why I wanted to draw a distinction.

I think having something to handle bad data is useful, but I think it should be
distinct, syntactically and implementation-wise, from constraint violations.

That's not to say that it couldn't fit into the model that UNLESS would try to
create:
 COPY ... UNLESS BAD DATA [ON COLUMN (y)] OR CONSTRAINT VIOLATION [ON (z)] ...

 2) It must be able to handle constraint violations;

Check. :)

 3) It must output all row errors to a log or errors table which makes 
 it possible to determine which input row failed and why;

Check; save data errors for now.

 4) It must not slow significantly (like, not more than 15%) the speed of 
 bulk loading.

Check. (See below)

 It seems like your idea, which would involve a second constraint 
 check, would achieve neigher #1 nor #4.

I'm not proposing that a second constraint check should be made.

The difficulty of my implementation comes from the position that I don't think
the current implementation of UNIQUE constraints is ideal. It is hidden
inside nbtree, which, while convenient, is not likely to be the best place for
it. I believe my original letter covered this by proposing a new pg_am column;
one that would hold a regproc that would be able to 'scan for insert' and return
the state(position, locks, whether an entry exists, anything else necessary for
a quick insert) of that scan to the caller for later use in the actual insert or
update. All other constraints appear to require trivial modifications to get it
to work with UNLESS without any redundancy.
-- 
Regards, James William Pye

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [HACKERS] Copy From Insert UNLESS

2006-02-06 Thread Josh Berkus
James,

 The difficulty of my implementation comes from the position that I don't
 think the current implementation of UNIQUE constraints is ideal. It is
 hidden inside nbtree, which, while convenient, is not likely to be the
 best place for it. 

Agreed; one of the things that's been on the TODO list for quite a while is 
real deferrable unique constraints that would allow for (for example) 
reordering of a UNIQUE column inside a transaction.

-- 
--Josh

Josh Berkus
Aglio Database Solutions
San Francisco

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


[HACKERS] Problems with createlang - windows

2006-02-06 Thread Márcio A. Sepp

Hi,

There is something wrong with createlang on my windows system. Please,
see the output:

C:\Arquivos de programas\PostgreSQL\8.1\bin\createlang.exe pltcl -U postgres test
createlang: language installation failed:
ERROR: could not load library C:/Arquivos de programas/PostgreSQL/8.1/lib/pltcl.dll:
Não foi possível encontrar o módulo especificado.

C:\

Translating it to English, that is something like: .../lib/pltcl.dll:
It is not possible to find the specified module.

Looking at the directory, I can see the file pltcl.dll:

C:\Arquivos de programas\PostgreSQL\8.1\lib>dir pltcl*
 O volume na unidade C não tem nome.
 O número de série do volume é 1442-C8D0

 Pasta de C:\Arquivos de programas\PostgreSQL\8.1\lib

06/01/2006  10:59            56.235 pltcl.dll
               1 arquivo(s)          56.235 bytes
               0 pasta(s)  22.882.418.688 bytes disponíveis

C:\Arquivos de programas\PostgreSQL\8.1\lib

It is important to say that the same instruction for plpgsql runs ok. See:

C:\Arquivos de programas\PostgreSQL\8.1\bin\createlang.exe plpgsql -U postgres test

C:\

I think that there is a problem installing the language pltcl.

If I can help with more information, please, email me.

Att.
Márcio A. Sepp


Re: [HACKERS] [Bizgres-general] WAL bypass for INSERT, UPDATE and

2006-02-06 Thread Simon Riggs
On Fri, 2006-02-03 at 22:29 -0500, Bruce Momjian wrote:

 Based on this, I think we should just implement the TRUNCATE/DROP option
 for the table, and avoid the idea of allowing non-logged operations on a
 table that has any data we want recovered after a crash.

Well the only other option is this:

Implement an UNDO at abort like we do for DDL, which would truncate the
table back down to the starting point. That can be made to work for both
cases.

In addition, if the starting point was > 0 then we'd need to perform a
VACUUM style operation to remove any index pointers with tids into the
to-be-truncated blocks. That would then make it work for the
with-indexes and/or with-toast cases.

If starting point == 0 we would just truncate the indexes and toast
stuff too.

Most importantly we'd need to do this at recovery time. That bit will
take a bit of work to make it happen right, but seems doable.

So we cover both cases at once, using one lot of logic. But there is a
fair amount of work there, so I'll need to consider whether it's 8.2
material or not.

Best Regards, Simon Riggs


---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [HACKERS] Problems with createlang - windows

2006-02-06 Thread Andrew Dunstan


Do you have tcl installed on your machine? You need it, and it probably 
needs to be in your path, if you want to use pltcl.
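
(For reference, you can check which languages actually got installed with

SELECT lanname FROM pg_language;

plpgsql should show up there, while pltcl will be missing until
createlang succeeds.)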


This is not a -hackers question, btw - you should ask this somewhere 
like -general.


cheers

andrew

Márcio A. Sepp wrote:


Hi,

There is something wrong with createlang on my windows system. Please, 
see the output:


C:\Arquivos de programas\PostgreSQL\8.1\bin\createlang.exe pltcl -U
postgres test

createlang: language installation failed: ERROR: could not load
library C:/Arquivos de programas/PostgreSQL/8.1/lib/pltcl.dll: Não foi
possível encontrar o módulo especificado.

C:\

Translating it to English, that is something like: “…/lib/pltcl.dll.
It is not possible to find the specified module”.

Looking at the directory, I can see the file pltcl.dll:

C:\Arquivos de programas\PostgreSQL\8.1\lib>dir pltcl*

O volume na unidade C não tem nome.

O número de série do volume é 1442-C8D0

Pasta de C:\Arquivos de programas\PostgreSQL\8.1\lib

06/01/2006 10:59 56.235 pltcl.dll

1 arquivo(s) 56.235 bytes

0 pasta(s) 22.882.418.688 bytes disponíveis

C:\Arquivos de programas\PostgreSQL\8.1\lib

It is important to say that the same instruction for plpgsql runs ok. See:

C:\Arquivos de programas\PostgreSQL\8.1\bin\createlang.exe plpgsql
-U postgres test

C:\

I think that there is a problem installing the language pltcl.

If I can help with more information, please, email me.

Att.

Márcio A. Sepp



---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [HACKERS] [ADMIN] Postgres 8.1.x and MIT Kerberos 5

2006-02-06 Thread Bruce Momjian
Mohan K wrote:
 Peter,
 It is a chicken-and-egg problem: I still need to enable kerberos in the
 configure script to make sure the proper pieces are picked up. But of
 course the configure script fails :).
 If I provide the relevant patches to the configure.in script, is that
 acceptable?
 thanks

Probably if it isn't too ugly.

---


 
 
 On 2/6/06, Peter Eisentraut [EMAIL PROTECTED] wrote:
 
  Mohan K wrote:
   Regarding the configure issue:
    The platform is Tru64 Unix 5.1b. The problem I had was that we have
    compiled our Kerberos build statically and it is installed in a
    directory other than the standard location. The trick of adding to LIBS
    did not work, as the (krb5support) library needs to come after the
    other libs (is there a way to control that?).
 
  I think what this comes down to is that we don't support static builds
  very well at all.  So you will have to resort to editing
  Makefile.global yourself after configuring.
 
  --
  Peter Eisentraut
  http://developer.postgresql.org/~petere/
 

-- 
  Bruce Momjian|  http://candle.pha.pa.us
  pgman@candle.pha.pa.us   |  (610) 359-1001
  +  If your life is a hard drive, |  13 Roberts Road
  +  Christ can be your backup.|  Newtown Square, Pennsylvania 19073

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [HACKERS] [Bizgres-general] WAL bypass for INSERT, UPDATE and

2006-02-06 Thread Bruce Momjian

I have split up the TODO items as:

* Allow WAL logging to be turned off for a table, but the table
  might be dropped or truncated during crash recovery [walcontrol]

  Allow tables to bypass WAL writes and just fsync() dirty pages on
  commit.  This should be implemented using ALTER TABLE, e.g. ALTER
  TABLE PERSISTENCE [ DROP | TRUNCATE | DEFAULT ].  Tables using
  non-default logging should not use referential integrity with
  default-logging tables.  A table without dirty buffers during a
  crash could perhaps avoid the drop/truncate.

* Allow WAL logging to be turned off for a table, but the table would
  avoid being truncated/dropped [walcontrol]

  To do this, only a single writer can modify the table, and writes
  must happen only on new pages so the new pages can be removed during
  crash recovery.  Readers can continue accessing the table.  Such
  tables probably cannot have indexes.  One complexity is the handling
  of indexes on TOAST tables.

I think the first one is possible, while the second has many
complexities that make its implementation suspect.
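
For illustration only, the proposed (not yet implemented) syntax from the
first item would be used roughly as follows, with the table name
hypothetical:

ALTER TABLE sessions PERSISTENCE TRUNCATE;  -- truncate during crash recovery
ALTER TABLE sessions PERSISTENCE DEFAULT;   -- restore normal WAL logging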

---

Simon Riggs wrote:
 On Fri, 2006-02-03 at 22:29 -0500, Bruce Momjian wrote:
 
  Based on this, I think we should just implement the TRUNCATE/DROP option
  for the table, and avoid the idea of allowing non-logged operations on a
  table that has any data we want recovered after a crash.
 
 Well the only other option is this:
 
 Implement an UNDO at abort like we do for DDL, which would truncate the
 table back down to the starting point. That can be made to work for both
 cases.
 
 In addition, if the starting point was > 0 then we'd need to perform a
 VACUUM style operation to remove any index pointers with tids into the
 to-be-truncated blocks. That would then make it work for the
 with-indexes and/or with-toast cases.
 
 If starting point == 0 we would just truncate the indexes and toast
 stuff too.
 
 Most importantly we'd need to do this at recovery time. That bit will
 take a bit of work to make it happen right, but seems doable.
 
 So we cover both cases at once, using one lot of logic. But there is a
 fair amount of work there, so I'll need to consider whether it's 8.2
 material or not.
 
 Best Regards, Simon Riggs
 
 
 ---(end of broadcast)---
 TIP 2: Don't 'kill -9' the postmaster
 

-- 
  Bruce Momjian|  http://candle.pha.pa.us
  pgman@candle.pha.pa.us   |  (610) 359-1001
  +  If your life is a hard drive, |  13 Roberts Road
  +  Christ can be your backup.|  Newtown Square, Pennsylvania 19073

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [HACKERS] [Bizgres-general] WAL bypass for INSERT, UPDATE and

2006-02-06 Thread Christopher Kings-Lynne

* Allow WAL logging to be turned off for a table, but the table
  might be dropped or truncated during crash recovery [walcontrol]

  Allow tables to bypass WAL writes and just fsync() dirty pages on
  commit.  This should be implemented using ALTER TABLE, e.g. ALTER
  TABLE PERSISTENCE [ DROP | TRUNCATE | DEFAULT ].  Tables using
  non-default logging should not use referential integrity with
  default-logging tables.  A table without dirty buffers during a
  crash could perhaps avoid the drop/truncate.


This would be such a sweet feature for website session tables...

Chris


---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [HACKERS] [Bizgres-general] WAL bypass for INSERT, UPDATE and

2006-02-06 Thread Doug McNaught
Bruce Momjian pgman@candle.pha.pa.us writes:

 TABLE PERSISTENCE [ DROP | TRUNCATE | DEFAULT ].  Tables using
 non-default logging should not use referential integrity with
 default-logging tables.

I have to say this smells way too much like MySQL for me to feel
comfortable.  But that's just my opinion.  :)

-Doug

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [HACKERS] [Bizgres-general] WAL bypass for INSERT, UPDATE and

2006-02-06 Thread Rick Gigger
I was thinking the exact same thing, except the "and just fsync()
dirty pages on commit" part. Wouldn't that actually make the
situation worse? I thought the whole point of WAL was that it was
more efficient to fsync all of the changes in one sequential write in
one file rather than fsyncing all of the separate dirty pages.




On Feb 6, 2006, at 7:24 PM, Christopher Kings-Lynne wrote:


* Allow WAL logging to be turned off for a table, but the table
  might be dropped or truncated during crash recovery [walcontrol]

  Allow tables to bypass WAL writes and just fsync() dirty pages on
  commit.  This should be implemented using ALTER TABLE, e.g. ALTER
  TABLE PERSISTENCE [ DROP | TRUNCATE | DEFAULT ].  Tables using
  non-default logging should not use referential integrity with
  default-logging tables.  A table without dirty buffers during a
  crash could perhaps avoid the drop/truncate.


This would be such a sweet feature for website session tables...

Chris


---(end of broadcast)---

TIP 2: Don't 'kill -9' the postmaster




---(end of broadcast)---
TIP 4: Have you searched our list archives?

  http://archives.postgresql.org


Re: [HACKERS] [Bizgres-general] WAL bypass for INSERT, UPDATE and

2006-02-06 Thread Bruce Momjian
Rick Gigger wrote:
 I was thinking the exact same thing, except the "and just fsync()
 dirty pages on commit" part. Wouldn't that actually make the
 situation worse? I thought the whole point of WAL was that it was
 more efficient to fsync all of the changes in one sequential write in
 one file rather than fsyncing all of the separate dirty pages.

Uh, supposedly the WAL traffic is not as efficient as fsyncing whole
pages if you are doing lots of full pages.

---


 
 On Feb 6, 2006, at 7:24 PM, Christopher Kings-Lynne wrote:
 
 * Allow WAL logging to be turned off for a table, but the table
   might be dropped or truncated during crash recovery [walcontrol]
 
   Allow tables to bypass WAL writes and just fsync() dirty pages on
   commit.  This should be implemented using ALTER TABLE, e.g. ALTER
   TABLE PERSISTENCE [ DROP | TRUNCATE | DEFAULT ].  Tables using
   non-default logging should not use referential integrity with
   default-logging tables.  A table without dirty buffers during a
   crash could perhaps avoid the drop/truncate.
 
  This would be such a sweet feature for website session tables...
 
  Chris
 
 
  ---(end of broadcast)---
  TIP 2: Don't 'kill -9' the postmaster
 
 
 
 ---(end of broadcast)---
 TIP 4: Have you searched our list archives?
 
http://archives.postgresql.org
 

-- 
  Bruce Momjian|  http://candle.pha.pa.us
  pgman@candle.pha.pa.us   |  (610) 359-1001
  +  If your life is a hard drive, |  13 Roberts Road
  +  Christ can be your backup.|  Newtown Square, Pennsylvania 19073

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [HACKERS] [PORTS] Failed install - libgen.so doesn't exist

2006-02-06 Thread Martijn van Oosterhout
On Mon, Feb 06, 2006 at 04:45:11PM -0500, Chris Browne wrote:
 Further, it appears to be AIX pre-4.3 only, when using it for dlopen()
 replacement...
 
 It would be an attractive idea to have configure detect not whether
 it's present, but rather whether it is needed, and leave it out for AIX
 4.3 and better...

That's kinda the point of these discussions, to answer the question:
what is in those libraries we need? Which symbol did we want? Rather
than trying to detect versions, is there some change in the library
(added or removed symbol) that we can base our decision on?

Does that library (ld) actually provide dlopen() or something else?

Thanks for the info.
-- 
Martijn van Oosterhout   kleptog@svana.org   http://svana.org/kleptog/
 Patent. n. Genius is 5% inspiration and 95% perspiration. A patent is a
 tool for doing 5% of the work and then sitting around waiting for someone
 else to do the other 95% so you can sue them.




Re: [HACKERS] slow information schema with thousands of users, seq.scan pg_authid

2006-02-06 Thread Pavel Stehule



In the meantime, reducing the LEFT JOIN to pg_roles to a JOIN as per
Peter's suggestion seems like the best short-term workaround.



That's the solution:

explain analyze SELECT n.nspname as "Schema",
  c.relname as "Name",
  CASE c.relkind WHEN 'r' THEN 'table' WHEN 'v' THEN 'view'
       WHEN 'i' THEN 'index' WHEN 'S' THEN 'sequence'
       WHEN 's' THEN 'special' END as "Type",
  r.rolname as "Owner"
FROM pg_catalog.pg_class c
JOIN pg_catalog.pg_roles r ON r.oid = c.relowner
LEFT JOIN pg_catalog.pg_namespace n ON n.oid = c.relnamespace
WHERE c.relkind IN ('S','')
  AND n.nspname NOT IN ('pg_catalog', 'pg_toast')
  AND pg_catalog.pg_table_is_visible(c.oid)
ORDER BY 1,2;

Sort  (cost=22.68..22.68 rows=2 width=193) (actual time=1.047..1.064 rows=5 loops=1)
  Sort Key: n.nspname, c.relname
  ->  Nested Loop Left Join  (cost=1.05..22.67 rows=2 width=193) (actual time=0.480..0.983 rows=5 loops=1)
        Join Filter: ("inner".oid = "outer".relnamespace)
        Filter: ("inner".nspname <> ALL ('{pg_catalog,pg_toast}'::name[]))
        ->  Nested Loop  (cost=0.00..21.34 rows=2 width=133) (actual time=0.386..0.642 rows=5 loops=1)
              ->  Seq Scan on pg_class c  (cost=0.00..9.29 rows=2 width=73) (actual time=0.334..0.431 rows=5 loops=1)
                    Filter: ((relkind = ANY ('{S,""}'::"char"[])) AND pg_table_is_visible(oid))
              ->  Index Scan using pg_authid_oid_index on pg_authid  (cost=0.00..6.01 rows=1 width=68) (actual time=0.02$
                    Index Cond: (pg_authid.oid = "outer".relowner)
        ->  Materialize  (cost=1.05..1.10 rows=5 width=68) (actual time=0.007..0.032 rows=5 loops=5)
              ->  Seq Scan on pg_namespace n  (cost=0.00..1.05 rows=5 width=68) (actual time=0.008..0.028 rows=5 loops=1$
Total runtime: 1.294 ms


Regards
Pavel Stehule



---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [HACKERS] [Bizgres-general] WAL bypass for INSERT, UPDATE and

2006-02-06 Thread Rick Gigger

Rick Gigger wrote:

I was thinking the exact same thing, except the "and just fsync()
dirty pages on commit" part. Wouldn't that actually make the
situation worse? I thought the whole point of WAL was that it was
more efficient to fsync all of the changes in one sequential write in
one file rather than fsyncing all of the separate dirty pages.


Uh, supposedly the WAL traffic is not as efficient as fsyncing whole
pages if you are doing lots of full pages.


So then you would want to use this particular feature only when
updating/inserting large amounts of info at a time?

For instance, if you have a table with rows that are 12 bytes wide,
using this feature would be very bad because it would fsync out a
whole bunch of extra data just to get those 12 bytes written. But on
a table that wrote out several kB of data at a time it would help,
because it would be filling up entire pages and not doing any
wasteful fsyncing?


I guess that probably would help session tables because it would
avoid writing the data twice.

In the case of session tables, though, I for one don't care if that
data is recoverable or not. If my database just crashed, I've
probably got bigger problems than just dropped sessions. Would it be
possible to a) not WAL-log that table, and b) not fsync that table at
all? Then, if that table is in an inconsistent state, just recreate
the table?

I'm guessing that is getting a little too crazy, but I have been
thinking about setting up a separate postgres instance listening on a
different port with fsync off as a possible way to avoid the fsync
overhead for the sessions.
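
(For illustration, such an instance would just run with something like

port = 5433
fsync = off

in its postgresql.conf; both are real parameters, the values are only
examples.)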








On Feb 6, 2006, at 7:24 PM, Christopher Kings-Lynne wrote:


* Allow WAL logging to be turned off for a table, but the table
  might be dropped or truncated during crash recovery [walcontrol]

  Allow tables to bypass WAL writes and just fsync() dirty pages on
  commit.  This should be implemented using ALTER TABLE, e.g. ALTER
  TABLE PERSISTENCE [ DROP | TRUNCATE | DEFAULT ].  Tables using
  non-default logging should not use referential integrity with
  default-logging tables.  A table without dirty buffers during a
  crash could perhaps avoid the drop/truncate.


This would be such a sweet feature for website session tables...

Chris


---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster




---(end of broadcast)---

TIP 4: Have you searched our list archives?

   http://archives.postgresql.org



--
  Bruce Momjian|  http://candle.pha.pa.us
  pgman@candle.pha.pa.us   |  (610) 359-1001
  +  If your life is a hard drive, |  13 Roberts Road
  +  Christ can be your backup.|  Newtown Square,  
Pennsylvania 19073


---(end of broadcast)---

TIP 5: don't forget to increase your free space map settings




---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings