Re: [Lsf-pc] [HACKERS] Linux kernel impact on PostgreSQL performance

2014-01-24 Thread Andreas Pflug
Am 23.01.14 02:14, schrieb Jim Nasby:
 On 1/19/14, 5:51 PM, Dave Chinner wrote:
 Postgres is far from being the only application that wants this; many
 people resort to tmpfs because of this:
 https://lwn.net/Articles/499410/
 Yes, we covered the possibility of using tmpfs much earlier in the
 thread, and came to the conclusion that temp files can be larger
 than memory so tmpfs isn't the solution here.:)

 Although... instead of inventing new APIs and foisting this work onto
 applications, perhaps it would be better to modify tmpfs such that it
 can handle a temp space that's larger than memory... possibly backing
 it with X amount of real disk and allowing it/the kernel to decide
 when to passively move files out of the in-memory tmpfs and onto disk.

This is exactly what I'd expect from a file system that's suitable for
tmp purposes. The current tmpfs would have been better named memfs or
the like, since it lacks dedicated disk backing storage.

Regards,
Andreas


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Prettification versus dump safety

2009-11-20 Thread Andreas Pflug

Tom Lane wrote:

In testing the TRIGGER WHEN patch, I notice that pg_dump is relying on
pg_get_triggerdef(triggeroid, true) (ie, pretty mode) to dump
triggers.  This means that trigger WHEN conditions will be dumped
without adequate parenthesization to ensure they are interpreted the
same way when loaded into future PG versions.  That's not acceptable.

The easy ways out of this are
(1) change pg_dump to not prettify trigger definitions at all, or
(2) change pg_get_triggerdef from the submitted patch so that it
doesn't reduce parenthesization even in pretty mode.
  

The pretty option was explicitly never intended for pg_dump use.
When pg_dump is used to create a future-version-proof dump, it shouldn't 
use the pretty option; when used to create schema scripts for 
external editing, it may be enabled. I propose inventing a command line option 
for that (maybe as a prettified plain text output format).
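A minimal illustration of the distinction being discussed (the trigger name is hypothetical):

```sql
-- The second argument toggles pretty mode; a dump must not rely on it,
-- since pretty mode may strip parentheses needed for safe reloading.
SELECT pg_get_triggerdef(t.oid, false) AS dump_safe,
       pg_get_triggerdef(t.oid, true)  AS for_editing
FROM pg_trigger t
WHERE t.tgname = 'my_trigger';        -- hypothetical trigger name
```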


Regards,
Andreas




Re: [HACKERS] pretty print viewdefs

2009-08-26 Thread Andreas Pflug
Andrew Dunstan wrote:


 But Pg
 should have some pretty print function - it is easy implemented there.
 Personally, I prefere Celko's notation, it is little bit more compact

 SELECT  sh.shoename, sh.sh_avail, sh.slcolor, sh.slminlen,
   sh.slminlen * un.un_fact AS slminlen_cm, sh.slmaxlen,
   sh.slmaxlen * un.un_fact AS slmaxlen_cm, sh.slunit
FROM shoe_data sh, unit un
   WHERE sh.slunit = un.un_name;

 but, sure - this is my personal preference.
   


 To do that we would need to keep track of how much space was used on
 the line and how much space what we were adding would use. It's
 doable, but it's a lot more work.

When initially implementing the pretty option, I ran into the same
consideration. Back then, I decided not to attempt any line breaking on the
column list. Instead, I treated the columns as just a bunch of
columns, putting the emphasis on the FROM clause (with potentially many
joined tables).
So pretty column formatting should still be white-space saving.

Regards,
Andreas



Re: [HACKERS] Clean shutdown and warm standby

2009-05-28 Thread Andreas Pflug
Simon Riggs wrote:

 No, because as I said, if archive_command has been returning non-zero
 then the archive will be incomplete.
   
 Yes. You think that's wrong? How would you like it to behave, then? I 
 don't think you want the shutdown to wait indefinitely until all files 
 have been archived if there's an error.
 

 The complaint was that we needed to run a manual step to synchronise the
 pg_xlog directory on the standby. We still need to do that, even after
 the patch has been committed because 2 cases are not covered, so what is
 the point of the recent change? It isn't enough. It *might* be enough,
 most of the time, but you have no way of knowing that is the case and it
 is dangerous not to check.
   
If archiving has stalled, it's not a clean shutdown anyway, and I
wouldn't expect the WAL archive to be automatically complete. I'd still
appreciate a warning that while the shutdown appeared regular, WAL
wasn't written completely. But in the common case of shutting down a
smoothly running server, the WAL archive should be complete as well.

Regards,
Andreas




Re: [HACKERS] New trigger option of pg_standby

2009-05-13 Thread Andreas Pflug

Andrew Dunstan wrote:
 
We're in Beta. You can't just go yanking stuff like that. Beta testers 
will be justifiably very annoyed.


Please calm down.

pg_standby is useful and needs to be correct. And its existence as a 
standard module is one of the things that has made me feel confident 
about recommending people to use the PITR stuff. I'll be very annoyed 
if it were to get pulled.


Since it is mentioned in the docs, I consider it at least the semi-official 
tool for pgsql PITR handling. But as this discussion reveals, the API is 
flawed, and will not allow guaranteed consistency (whatever pg_standby 
tries) until fixed. While this may not be a bug in the restore_script 
call, the PITR procedure as a whole is partially broken (in the sense that 
it doesn't provide what most users expect in a secure way) and thus 
needs to be fixed. It seems a fix can't be provided without extending 
the API.


Regards,
Andreas



Re: [HACKERS] Clean shutdown and warm standby

2009-04-28 Thread Andreas Pflug
Tom Lane wrote:
 Not at all, because the database would be very unhappy at restart
 if it can't find the checkpoint record pg_control is pointing to.
   

So for several weeks now, all postings just say how it will _not_ work.
Does this boil down to "There's no way to make sure that a graceful
failover won't lose data"?

Regards,
Andreas



Re: [HACKERS] Clean shutdown and warm standby

2009-04-27 Thread Andreas Pflug

Heikki Linnakangas wrote:


No, no crash is involved. Just a normal server shutdown and start:

1. Server shutdown is initiated
2. A shutdown checkpoint is recorded at XLOG point 1234, redo ptr is 
also 1234.
3. A XLOG_SWITCH record is written at 1235, right after the checkpoint 
record.
4. The last round of archiving is done. The partial WAL file 
containing the checkpoint and XLOG_SWITCH record is archived.

5. Postmaster exits.

6. Postmaster is started again. Since the system was shut down 
cleanly, no WAL recovery is done. The WAL insert pointer is 
initialized to right after the redo pointer, location 1235, which is 
also the location of the XLOG_SWITCH record.
7. The next WAL record written will be written at 1235, overwriting 
the XLOG_SWITCH record.
8. When the WAL file fills up, the system will try to archive the same 
WAL file again, this time with additional WAL records that after the 
checkpoint record. 


So, to turn this into a solution: would it be correct to execute 
RequestXLogSwitch right before CreateCheckPoint?



Regards,
Andreas




Re: [HACKERS] New trigger option of pg_standby

2009-04-21 Thread Andreas Pflug
Fujii Masao wrote:
 Hi,

 On Tue, Apr 21, 2009 at 8:28 PM, Heikki Linnakangas
 heikki.linnakan...@enterprisedb.com wrote:
   
 Simon Riggs wrote:
 
 If you do this, then you would have to change the procedure written into
 the 8.3 docs also. Docs aren't backpatchable.

 What you propose is *better* than raw pg_standby is now, but still not
 enough in all cases, as I think you know.
   
 No, I don't. What is the case where it doesn't work?
 

 It's the case which I described as the 2nd comment to your
 proposal.

 1. pg_standby tries to restore a non-existent file
 1-1. remove the trigger file
 1-2. pg_standby exits with non-zero code
 2. the startup process tries to read it from pg_xlog
 2-1. it is applied
 3. the startup process tries to restore the next file using pg_standby
   
I'm a little confused. After pg_standby has returned non-zero as an indication
of end-of-recovery, the startup process shouldn't request another file
from pg_standby, right? Which means 3. should never happen (unless the
startup process stalls and restarts, in which case I find it normal that
another trigger is required).

Regards,
Andreas



Re: [HACKERS] Warm Standby restore_command documentation

2009-04-20 Thread Andreas Pflug
Heikki Linnakangas wrote:
 Andreas Pflug wrote:
 I've been following the thread with growing lack of understanding why
 this is so hardly discussed, and I went back to the documentation of
 what the restore_command should do (
 http://www.postgresql.org/docs/8.3/static/warm-standby.html )

 While the algorithm presented in the pseudocode isn't dealing too good
 with a situation where the trigger is set while the restore_command is
 sleeping (this should be handled better in a real implementation), the
 code says

 Restore all wal files. If no more wal files are present, stop restoring
 if the trigger is set; otherwise wait for a new wal file.

 Since pg_standby is meant as implementation of restore_command, it has
 to follow the directive stated above; *anything else is a bug*.
 pg_standby currently does *not* obey this directive, and has that
 documented, but a documented bug still is a bug.

 I think you're interpreting the chapter too strongly. The provided
 pseudo-code is just an example of a suitable restore_command, it
 doesn't say that pg_standby behaves exactly like that. 
After reading that chapter, I assumed that pg_standby actually does work
like this, and skipped reading the pg_standby-specific doc.
The pgsql doc tries hard to give the best advice for common situations,
especially for integrity and safety issues. IMHO it's best to have the
warm-standby chapter as the reference for how things should work in typical
use-cases.

Regards,
Andreas






[HACKERS] Warm Standby restore_command documentation (was: New trigger option of pg_standby)

2009-04-14 Thread Andreas Pflug
I've been following the thread with a growing lack of understanding of why
this is so fiercely discussed, and I went back to the documentation of
what the restore_command should do (
http://www.postgresql.org/docs/8.3/static/warm-standby.html )

While the algorithm presented in the pseudocode doesn't deal too well
with a situation where the trigger is set while the restore_command is
sleeping (this should be handled better in a real implementation), the
code says

Restore all wal files. If no more wal files are present, stop restoring
if the trigger is set; otherwise wait for a new wal file.

Since pg_standby is meant as an implementation of restore_command, it has
to follow the directive stated above; *anything else is a bug*.
pg_standby currently does *not* obey this directive, and has that
documented, but a documented bug is still a bug.

Conclusion: there's no new trigger option needed; instead, pg_standby
has to be fixed so that it does what the warm standby option of postgres
needs. The trigger is only to be examined if no more files are
restorable, and only once.
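To make the conclusion concrete, here is a minimal sketch of a restore_command that follows the documented algorithm (illustrative only, not pg_standby itself; ARCHIVE_DIR and TRIGGER_FILE are assumed paths):

```shell
#!/bin/sh
# Sketch of a restore_command following the documented algorithm:
# restore the requested WAL file if the archive has it; if not, end
# recovery (non-zero exit) only when the trigger file exists; otherwise
# wait for the file to arrive. Paths are assumptions for illustration.
ARCHIVE_DIR=${ARCHIVE_DIR:-/var/lib/pgsql/archive}
TRIGGER_FILE=${TRIGGER_FILE:-/var/lib/pgsql/trigger}

restore_one() {
    wal_name=$1   # %f: WAL file name requested by the server
    target=$2     # %p: path the server wants it copied to
    while true; do
        if [ -f "$ARCHIVE_DIR/$wal_name" ]; then
            cp "$ARCHIVE_DIR/$wal_name" "$target"
            return 0            # restored; recovery continues
        fi
        if [ -f "$TRIGGER_FILE" ]; then
            return 1            # nothing left and trigger set: stop restoring
        fi
        sleep 5                 # no file yet and no trigger: wait
    done
}

# invoked by the server as: restore_command = 'restore.sh %f %p'
if [ "$#" -ge 2 ]; then
    restore_one "$1" "$2"
fi
```

With this loop, the trigger file is only consulted once the archive has run dry, which is the ordering the warm-standby chapter prescribes.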

Regards,
Andreas



Re: [HACKERS] migrate data 6.5.3 - 8.3.1

2008-08-15 Thread Andreas Pflug

alexander lunyov wrote:

Guillaume Smet wrote:

I want to try new pg_dump to connect to old server, but i can't - old
postgres doesn't listening to network socket. Why postgres 6.5.3 not
binding to network socket? It started with this line: 


Maybe you should just dump schema and data separately with your old 
pg_dump tool, then rework the schema for 8.3 manually.


Regards,
Andreas




Re: [HACKERS] migrate data 6.5.3 - 8.3.1

2008-08-15 Thread Andreas Pflug

alexander lunyov wrote:

Andreas Pflug wrote:

I want to try new pg_dump to connect to old server, but i can't - old
postgres doesn't listening to network socket. Why postgres 6.5.3 not
binding to network socket? It started with this line: 


Maybe you should just dump schema and data separately with your old 
pg_dump tool, then rework the schema for 8.3 manually.


I can do this, but i don't know how to rework it.
I wonder whether you need these self-defined aggregates at all; most or all 
of them are in 8.3 already.


Regards,
Andreas




Re: [HACKERS] Overhauling GUCS

2008-06-06 Thread Andreas Pflug

David E. Wheeler wrote:


How about a simple rule, such as that machine-generated comments start 
with ##, while user comments start with just #? I think that I've 
seen such a rule used before. At any rate, I think that, unless you 
have some sort of line marker for machine-generated comments, there 
will be no way to tell them apart from user comments. 


Two heretical questions:
Do we need user-generated comments at all?
I can't remember ever having used any comment in postgresql.conf.

Why do so many people here insist on editing postgresql.conf as the primary 
means of changing config params?
Isn't a psql -c "SET foo=bar; MAKE PERSISTENT" just as good as sed'ing 
postgresql.conf or editing it manually?
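(For what it's worth, much later PostgreSQL releases grew essentially this capability: ALTER SYSTEM, available since 9.4, persists a setting without hand-editing postgresql.conf.)

```sql
-- Roughly the "SET ... MAKE PERSISTENT" idea, as it eventually landed:
ALTER SYSTEM SET work_mem = '64MB';   -- written to postgresql.auto.conf
SELECT pg_reload_conf();              -- apply without a restart
```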



Looking around for different approaches, network appliances come to my 
mind, e.g. Cisco routers and PIX. You have three ways to configure a PIX:
- use a command line (via ssh or telnet, equivalent to psql); WRITE 
MEMORY makes the changes survive a reboot.
- use a web interface (or similar tool)
- use tftp to upload/download the complete config and edit the 
file. User comments will be lost, with the exception of those that have 
been applied with special comment commands (equivalent to COMMENT ON).




Regards,
Andreas





Re: [HACKERS] Overhauling GUCS

2008-06-06 Thread Andreas Pflug

Gregory Stark wrote:

Andreas Pflug [EMAIL PROTECTED] writes:

  

Why do so many people here insist on editing postgresql.conf as primary means
of changing config params?
Isn't a psql -c SET foo=bar; MAKE PERSISTENT just as good as sed'ing
postgresql.conf or doing it manually?



no, it's awful.

  


So I wonder why you accept it when configuring schemas. What's the big 
difference between setting a config param and creating a table?

And ultimately, the config param file format may well look like an SQL 
command file, restricted to SET only.





And in every major installation I've seen people use the last option. They
treat the original text file which is kept elsewhere -- normally checked into
some revision control system, tracked and managed like source code -- as the
canonical and authoritative version.
  


That's how you'd have to manage the schema sources too, no? Your 
comments are lost as well once the schema creation scripts have been executed, 
and manual changes may interfere with that.


Regards,
Andreas






Re: [HACKERS] Overhauling GUCS

2008-06-06 Thread Andreas Pflug

Gregory Stark wrote:

So all you have is our existing file except with an additional layer of
quoting to deal with, a useless SET keyword to annoy users, and a file that
you need a bison parser 
Don't you think that's a little over the top, throwing bison at the 
simple task of extending postgresql.conf scanning so that it accepts --, /* */ 
and SET?



to deal instead of a simple keyword-value syntax that
sed can manipulate.

  
sed seems to be THE killer application... Apart from the fact that sed 
could easily recognize SET, my answer would be to use psql to modify the 
config, not sed.


Text config files are NOT friendly to beginner and intermediate users. IMHO 
the current restriction on GUC changes is a major obstacle to pgsql 
tuning tools, e.g. one written as a Google SoC project. Graphical tools aren't 
too popular on pgsql-hackers, but please consider for a moment how much 
pgAdmin may have contributed to the boost in pgsql usage, esp. on Windows.


Regards,
Andreas






Re: [HACKERS] Overhauling GUCS

2008-06-06 Thread Andreas Pflug

Gregory Stark wrote:
  

Text config files are NOT friendly for beginner and mediocre users. IMHO the
current restriction on GUC changes is a major obstacle towards pgsql tuning
tools, e.g. written as a Google SoC project. Graphic tools aren't too popular
at pgsql-hackers, but please contemplate a little how much pgadmin may have
contributed to the pgsql usage boost, esp. on windows.



Like it or not computers actually have to store state when you're done
entering it via the GUI. If you design around the GUI you end up with system
that can *only* be used via a GUI and spend years trying to work around that
(witness Windows which is only now with a lot of effort recovering from that
mistake).
  


I never advocated a file format that isn't editable any more; au 
contraire. And the claim that GUI configuration contradicts manual 
editing is plainly wrong, even with most Windows software (if written 
nicely to the registry, you can dump the key, edit it and reload it, or 
use regedit).

OTOH, you can't make most Windows users happy with a text-file-only version.

Regards,
Andreas




Re: [HACKERS] Overhauling GUCS

2008-06-06 Thread Andreas Pflug

Tom Lane wrote:

I grow weary of this thread.  I will say it once more: I do not believe
for one instant that the current formatting of postgresql.conf is the
major impediment, or even a noticeable impediment, to producing a useful
configuration wizard.  If you wish to prove otherwise, provide a
complete wizard except for the parts that touch the config file, and
I will promise to finish it.

I will not read or respond to any further discussion of changing the
config file format.  It's a waste of bandwidth.
  


Your statement doesn't really surprise me. Apparently you take a certain 
delight in misunderstanding my point, which isn't primarily about the 
file format, but about config param accessibility via an API/SQL.

I personally wouldn't even think about starting such a wizard unless I 
had an idea how to push the result into the database. No, not into a file, 
but via SQL! So your statement that you won't react until a wizard is almost 
ready is prohibitive, apart from the fact that not only wizards 
(featuring AI) are interesting, but simple config tools as well.


Regards,
Andreas




Re: [HACKERS] Overhauling GUCS

2008-06-06 Thread Andreas Pflug
Gregory Stark wrote:
 Andreas Pflug [EMAIL PROTECTED] writes:
 
 I personally wouldn't even think about starting such a wizard, unless I have 
 an
 idea how to push the result into the database. No, not a file, but via SQL! 
 So
 your statement you won't react unless a wizard is almost ready is 
 prohibitive,
 apart from the fact that not only wizards (featuring AI) are interesting, but
 simple config tools as well.
 
 Well there's a perfectly good place to start today. Dump out a config file

I think I made my point very clear when stating "not a file, but via
SQL". I'm not a native English speaker, but I'm sure you
understood. I must assume you're polluting this thread deliberately in
order to sabotage its original intention. I find this
disgusting.

OK, trying to contribute GUI tools for pgsql is for masochists. We have
vi, sed and grep; no more tools required, right?

Regards,
Andreas



Re: [HACKERS] Overhauling GUCS

2008-06-05 Thread Andreas Pflug

Greg Smith wrote:

On Thu, 5 Jun 2008, Magnus Hagander wrote:

We really need a proper API for it, and the stuff in pgAdmin isn't 
even enough to base one on.


I would be curious to hear your opinion on whether the GUC overhaul 
discussed in this thread is a useful precursor to building such a 
proper API.


Since I'm the guy who initially wrote that config file editing stuff, I 
feel somewhat addressed.

The answer is a clear ABSOLUTELY.

- The current implementation is able to edit the file directly or 
through pgsql functions; any format change will affect that functionality 
immediately.
- If documentation is enhanced by adding more comments to the 
postgresql.conf file, this won't help the editor, because it can't rely 
on them to present help and hints to the user. It needs the comments/help 
in pg_settings or the like.



Regards,
Andreas




Re: [HACKERS] Overhauling GUCS

2008-06-04 Thread Andreas Pflug

Decibel! wrote:


There's no reason that the server has to deal with a text file. I
completely agree that there must be a method to change settings even if
the database isn't running, but that method does not necessarily need to
be a text file. If we can come up with a standard API for reading and
writing config changes, we (or anyone else) can write any number of
tools to deal with the settings. And once we have an API, we can provide
a SQL interface on top of it.
Once in a lifetime, a man should plant a tree, father a child, and write 
an editor... :-)
Hiding the storage of config parameters opaquely behind an API is 
something I've hated for a long time on win32.

Reading this thread, I wonder whether anybody has ever seen a config 
file for a complex software product that was easily editable and 
understandable. I don't know of one. If there is one, it'd be nice to know 
it so we can learn from it.

IMHO the best compromise between machine and human readability is an XML 
format. It's easily decorated with comments and easily interpreted, and a 
pg_settings view could enhance it with even more comments, so an editor 
using pgsql functions (to read and write postgresql.conf.xml) could be 
enabled to supply comprehensive help.
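A sketch of what such a postgresql.conf.xml might look like (purely hypothetical; no such format exists):

```xml
<!-- Hypothetical format, for illustration only -->
<postgresql-config>
  <param name="shared_buffers" value="512MB">
    <comment>Raised from 128MB after adding RAM</comment>
  </param>
  <param name="work_mem" value="16MB"/>
</postgresql-config>
```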


Regards,
Andreas







Re: [HACKERS] Overhauling GUCS

2008-06-04 Thread Andreas Pflug

Aidan Van Dyk wrote:

* Andreas Pflug [EMAIL PROTECTED] [080604 10:20]:

  
Hiding the storage of config parameters opaquely behind an API is 
something I've been hating for a long time on win32.



;-)

  
When reading this thread, I'm wondering if anybody ever saw a config 
file for a complex software product that was easily editable and 
understandable. I don't know one. If there was one, it'd be nice to know 
it so we can learn from it.



PostreSQL, Apache, X.org

They are all easily editable, and understandable, in the sense that I
understand that I'm supposed to edit the line, changing the value
(following the comments list of accepted values)

They are less understandable if you mean that I know the implications
of any change I make.  But guess what, having those values inputed
through some other mechanism (like a GUI config file editor, a SQL statement,
or a nice pgadmin-SQL-hiding-interface isn't going to change that part
of understandable.  That part of understandable only comes through
good documentation and reference material, which is universally
applicable to any config method.
  


Right. On the editing side, a link column in pg_settings that could be 
used to construct a URL to postgresql.org/docs/xxx#yyy would help 
in creating editors that support the user. Whatever a text config file 
looks like, you need to know exactly which parameter to use and where to 
locate it; even structuring the parameters won't help much with the 
typical starter task "I installed pgsql, what do I do next?".
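pg_settings already carries most of the metadata such an editor would need; the hypothetical link column would only add the missing documentation URL:

```sql
-- Metadata a config editor can already query (all real pg_settings columns):
SELECT name, setting, unit, category, short_desc
FROM pg_settings
WHERE name = 'shared_buffers';
```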
  
IMHO the best compromise in machine and human readability is an XML 
format. It's easily decorateable with comments, easily interpreted and a 
pg_settings view could enhance it with even more comments, so an editor 
using pgsql functions (to read and write postresql.conf.xml) could be 
enabled to supply comprehensive help.



Well, In my past, I've generally not got around to installing and using
software that reqired me to edit some jumble of XML.  Ya, maybe I'm
lucky.  And since I've got a lot invested in PG, I'ld be forced to of PG
moved to an XML config, but I'ld be forced to kicking and screaming...

I just *know* that I'ld reload/restart postmaster some time, and the
config file wouldn't be quite correct, and I'ld search for 10 minutes
trying to find the extra (or lack) , or missing closing /...  But maybe
most people are better at parsing XML than me.  And that also may be
because I've actively avoided it for so long ;-)
  
Well, I'm not an XML evangelist either. But the usual "commenting out a 
parameter will reset it to default on reload, no?" caveat isn't funny 
either, nor are duplicate parameter settings scattered throughout the file.
This may be avoided by *preferably* editing the parameters through pgsql 
itself; the current postgresql.conf file format isn't too machine-write 
friendly (as I know, since I wrote the pgAdmin config file editor). But 
having a config file that can't be edited with simple editors at all is a 
nightmare.


Regards,
Andreas




Re: [HACKERS] Overhauling GUCS

2008-06-04 Thread Andreas Pflug
Tom Lane wrote:

 
 * Can we present the config options in a more helpful way (this is 99%
 a documentation problem, not a code problem)?
 
 * Can we build a configuration wizard to tell newbies what settings
 they need to tweak?


It's certainly one thing to create an initial postgresql.conf from
scratch after some inquiry, but a different level of problem to deal
with when offering to change the settings. IMHO initial creation isn't
enough; users will feel even more left alone if there are no tools
helping them further. I guess most users will start tweaking after the
server has already been running for a while, with some config already in place.
That's when file format and/or APIs come into play. Preserving comments
and/or using them in a wizard isn't too easy with the current format.


Regards,
Andreas



Re: [HACKERS] idea: storing view source in system catalogs

2008-05-21 Thread Andreas Pflug

Florian Pflug wrote:


But maybe you could store the whitespace appearing before (or after?) 
a token in the parse tree that is stored for a view. That might not 
allow reconstructing the *precise* statement, but at least the 
reconstructed statement would preserve newlines and indention - which 
probably is the whole reason for wanting to store the original 
statement in the first place, no? I



Not the whole reason. To get a view definition that is more readable, 
the pretty_bool option of pg_get_viewdef already does some newline and 
indent formatting. Not the original formatting, but Good Enough (TM), I 
believe.

What's really lost is any comment that might have existed in the initial 
source. I previously had the idea of inventing comment nodes, but never 
got around to implementing them.


Regards,
Andreas



Re: [HACKERS] Proposed Patch - LDAPS support for servers on port 636 w/o TLS

2008-05-05 Thread Andreas Pflug

Tom Lane wrote:

stephen layland [EMAIL PROTECTED] writes:
  

I've written a quick patch against the head branch (8.4DEV, but it also
works with 8.1.3 sources) to fix LDAP authentication support to
work with LDAPS servers that do not need start TLS.   I'd be interested
to hear your opinions on this.



Not being an LDAP user, I'm not very qualified to comment on the details
here, but ...

  

My solution was to create a boolean config variable called
ldap_use_start_tls which the user can toggle whether or not
start tls is necessary.



... I really don't like using a GUC variable to determine the
interpretation of entries in pg_hba.conf.  A configuration file exists
to set configuration, it shouldn't need help from a distance.  Also,
doing it this way means that if several different LDAP servers are
referenced in different pg_hba.conf entries, they'd all have to have
the same encryption behavior.

I think a better idea is to embed the flag in the pg_hba.conf entry
itself.  Perhaps something like ldapso: instead of ldaps: to
indicate old secure ldap protocol, or include another parameter
in the URL body.
  
With ldaps on port 636, STARTTLS should NEVER be issued, so the protocol 
identifier ldaps should be sufficient as a "do not issue STARTTLS" flag. 
IMHO the current pg_hba.conf implementation doesn't follow the usual 
nomenclature; ldap with TLS is still ldap. Using ldaps as an indicator for 
ldap with TLS over port 389 is misleading for anyone familiar with ldap.
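For what it's worth, later PostgreSQL releases resolved this along the lines suggested here; the entries below sketch the distinction (option names are from much later releases, not from the patch under discussion):

```
# STARTTLS over the standard port (still "ldap"):
host all all 0.0.0.0/0 ldap ldapserver=ldap.example.com ldaptls=1
# LDAP over SSL on 636, no STARTTLS (ldapscheme added in PostgreSQL 11):
host all all 0.0.0.0/0 ldap ldapserver=ldap.example.com ldapscheme=ldaps ldapport=636
```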


Regards,
Andreas




Re: [HACKERS] Truncate Triggers

2008-01-26 Thread Andreas Pflug

Robert Treat wrote:

On Friday 25 January 2008 06:40, Simon Riggs wrote:
  

Notes: As the syntax shows, these would be statement-level triggers
(only). Requesting row level triggers will cause an error. [As Chris
Browne explained, if people really want, they can use these facilities
to create a Before Statement trigger that executes a DELETE, which then
fires row level calls.]




This seems to completly hand-wave away the idea of implementing row level 
visibility in statement level triggers, something I am hoping to see 
implemented somewhere down the line. Am I missing something?


  
The rowset (not row) associated with the statement level trigger would 
be the whole table in case of a TRUNCATE trigger, so in this (corner) 
case it's not too helpful.
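A sketch of how such a statement-level TRUNCATE trigger might be declared (hedged: the syntax is modelled on what was proposed and later committed; the table and function names are invented):

```sql
CREATE FUNCTION log_truncate() RETURNS trigger AS $$
BEGIN
    RAISE NOTICE 'table % is being truncated', TG_TABLE_NAME;
    RETURN NULL;
END;
$$ LANGUAGE plpgsql;

-- statement-level only; FOR EACH ROW would raise an error per the proposal
CREATE TRIGGER orders_trunc
    BEFORE TRUNCATE ON orders
    FOR EACH STATEMENT
    EXECUTE PROCEDURE log_truncate();
```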


Regards,
Andreas



---(end of broadcast)---
TIP 7: You can help support the PostgreSQL project by donating at

   http://www.postgresql.org/about/donate


Re: [HACKERS] Postgresql Materialized views

2008-01-16 Thread Andreas Pflug

Simon Riggs wrote:


My thinking was if you load a 1000 rows and they all have the same key
in your summary table then you'll be doing 1000 updates on a single row.
  
This is true because the statement level triggers are still rudimentary, 
with no OLD and NEW support. A single AFTER statement trigger execution 
could maintain the summary table with much less effort.
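To make the point concrete, here is a sketch of the single set-based statement an AFTER statement trigger could run once OLD/NEW rowset support exists (syntax modelled on the transition tables that eventually appeared in much later releases; all table and column names are invented):

```sql
CREATE FUNCTION maintain_summary() RETURNS trigger AS $$
BEGIN
    -- one aggregated UPDATE instead of a thousand per-row updates
    UPDATE summary s
       SET total = s.total + n.delta
      FROM (SELECT key, sum(amount) AS delta
              FROM new_rows
             GROUP BY key) n
     WHERE s.key = n.key;
    RETURN NULL;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER detail_summary
    AFTER INSERT ON detail
    REFERENCING NEW TABLE AS new_rows
    FOR EACH STATEMENT
    EXECUTE PROCEDURE maintain_summary();
```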


Regards,
Andreas




Re: [HACKERS] Locales and Encodings

2007-10-12 Thread Andreas Pflug
Michael Glaesemann wrote:

 On Oct 12, 2007, at 10:19 , Gregory Stark wrote:

 It would make Postgres inconsistent and less integrated with the rest
 of the
 OS. How do you explain that Postgres doesn't follow the system's
 configurations and the collations don't agree with the system
 collations?

 How is this fundamentally different from PostgreSQL using a separate
 users/roles system than the OS?
Even more, eliminating dependencies on an OS's correct implementation of
locale stuff appears A Good Thing to me. I wonder if a compile-time
option to use ICU in 8.4 should be considered, regarding all those
lengthy threads about encoding/locale/collation problems.

Regards,
Andreas




Re: [HACKERS] Some questions about mammoth replication

2007-10-11 Thread Andreas Pflug
Alexey Klyukin wrote:

   
 For what use cases do you think your WAL-based approach is better than
 Slony/Skytools trigger-based one ?
 

 A pure trigger based approach can only replicate data for the commands
 which fire triggers. AFAIK Slony is unable to replicate the TRUNCATE
 command
It could be wrapped with ddl_script, which obviously isn't transparent to
the application, but I guess a table that's truncated regularly won't be
a typical candidate for (async) replication either.

Regards,
Andreas




Re: [HACKERS] pipe chunking vs Windows

2007-07-29 Thread Andreas Pflug
Andrew Dunstan wrote:

 I have no idea why that's done - it goes back to the origins of the
 syslogger - probably because someone mistakenly thinks all Windows
 text files have to have CRLF line endings.

 I tried changing that to _O_BINARY, and calling _setmode on both the
 pipe before it's duped into stderr and stderr after the dup and both.
 Nothing seemed to work.
AFAIR the flag has to be set again in each child process.

Regards,
Andreas



Re: [HACKERS] pipe chunking vs Windows

2007-07-29 Thread Andreas Pflug
Andrew Dunstan wrote:
  
 I have no idea why that's done - it goes back to the origins of the
 syslogger - probably because someone mistakenly thinks all Windows
 text files have to have CRLF line endings.
Yes, this was intentional: notepad still doesn't like LF line endings.
Not my preferred text viewer, but the only one that's always available.

Regards,
Andreas




Re: [HACKERS] pipe chunking vs Windows

2007-07-29 Thread Andreas Pflug
Andrew Dunstan wrote:
  
 Not for Wordpad though, and it's pretty universal too. And Notepad
 won't load a file of any great size anyway. Furthermore, we just can't
 have this alongside the pipe chunking protocol, so I'm inclined to
 blow it away altogether, unless there are pretty loud squawks.
 Especially for machine-readable logs, we want the log file to get
 *exactly* what we send it.
Well, I'd LOVE reliably machine-readable logs, but I think that will
collide with human-readability.

Regards
Andreas




Re: [HACKERS] Reducing NUMERIC size for 8.3

2007-06-18 Thread Andreas Pflug
Simon Riggs wrote:
 The objections to applying this patch originally were:
 2. it would restrict number of digits to 508 and there are allegedly
 some people that want to store > 508 digits.
   
If 508 digits are not enough, would 1000 digits be sufficient? Both limits
appear quite arbitrary to me.

Regards,
Andreas




Re: [HACKERS] Reducing NUMERIC size for 8.3

2007-06-18 Thread Andreas Pflug
Tom Lane wrote:
 Andreas Pflug [EMAIL PROTECTED] writes:
   
 Simon Riggs wrote:
 
 The objections to applying this patch originally were:
 2. it would restrict number of digits to 508 and there are allegedly
 some people that want to store > 508 digits.

   
 If 508 digits are not enough, would 1000 digits be sufficient? Both limits
 appear quite arbitrary to me.
 

 As per the recent discussion about factorial, the current limit of
 numeric format is 10^131071 --- there is a whole lot of daylight between
 that and 10^508.

 I had a thought though: it's possible to reduce the header overhead for
 typical-size numbers without giving up the ability to store large ones.
 This is because the POS/NEG/NAN sign possibilities leave one unused bit
 pattern.  Hence:

 1. Switch the positions of the n_sign_dscale and n_weight fields in the
 long format, so that the sign bits are in the first word.

 2. Reserve the fourth sign bit pattern to denote a compressed-header
 format in which there's just one uint16 header word and the
 NumericDigits start right after that.  The header word could contain:
   2 bits: sign distinguishing this from the two-word-header format
   1 bit: actual number sign (POS or NEG, disallow NaN)
   6 bits: weight, room for -32 .. 31
   7 bits: dscale, room for 0 .. 127

 3. When packing a NumericVar into a Numeric, use this short format when
 it's not a NaN and the weight and dscale are in range, else use the long
 format.

 Since the weight is in base-10^4 digits, this bit allocation allows a
 dynamic range of about +- 10^127 which fits well with the dscale range.
 But I suspect that most of the use-cases for long numerics involve large
 integers, so it might be more useful to shave another bit or two from
 dscale and give 'em to weight.

 In any case, no capability is lost, unlike the original proposal; and
 this would be much less invasive than the original patch since there's
 no need to play tricks with the content of the digit array.
   

I wonder if the currently waiting patch isn't Good Enough for
99.999 % of use cases, and all others can use plain numeric
instead of numeric(1000,800) or so. Especially since there are many
patches waiting that do need further investigation and refining.

Regards,
Andreas




Re: [HACKERS] Eliminating unnecessary left joins

2007-04-06 Thread Andreas Pflug
Tom Lane wrote:
 Ottó Havasvölgyi [EMAIL PROTECTED] writes:
   
 When using views built with left joins, and then querying against these
 views, there are a lot of joins in the plan that are not necessary, because I
 don't select/use any column of each table in the views every time. Tables
 that are left joined and never referenced anywhere else in the query should
 be removed from the plan.
 

 That might cause you to get the wrong number of copies of some rows ---
 what if a row of the left table should join to multiple rows on the right?
   
That would be trouble. But I've seen quite a few cases where the right
side can contain only zero or one row, because of PK constraints. In this
case, elimination would be safe.
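A concrete example of the safe case (hypothetical tables): the primary key on the right-hand side guarantees at most one match per left row, so eliminating the unreferenced join cannot change the result:

```sql
CREATE TABLE customer (id int PRIMARY KEY, name text);
CREATE TABLE orders   (id int PRIMARY KEY, cust_id int);

CREATE VIEW order_v AS
    SELECT o.id, o.cust_id, c.name
      FROM orders o
      LEFT JOIN customer c ON c.id = o.cust_id;

-- no column of "customer" is referenced, and customer.id is unique,
-- so the left join could safely be removed from the plan
SELECT id, cust_id FROM order_v;
```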


Regards,
Andreas



Re: [HACKERS] TOASTing smaller things

2007-03-22 Thread Andreas Pflug
Luke Lonergan wrote:
 I advocate the following:

 - Enable specification of TOAST policy on a per column basis

 As a first step, then:

 - Enable vertical partitioning of tables using per-column specification of
 storage policy.
   
Wouldn't it be enough to enable having the toast table in a different
tablespace?
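As a point of reference, part of a per-column policy already exists via ALTER TABLE ... SET STORAGE, which controls compression and out-of-line storage per column; what it cannot do is place the toast data in a different tablespace (table and column names invented):

```sql
-- store this column out of line, uncompressed
ALTER TABLE documents ALTER COLUMN body SET STORAGE EXTERNAL;

-- keep this one inline, compressed only as a last resort
ALTER TABLE documents ALTER COLUMN title SET STORAGE MAIN;
```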

Regards,
Andreas



Re: [HACKERS] Interaction of PITR backups and Bulk operationsavoiding WAL

2007-03-09 Thread Andreas Pflug
Tom Lane wrote:
 Simon Riggs [EMAIL PROTECTED] writes:
   
 On Fri, 2007-03-09 at 11:15 -0500, Tom Lane wrote:
 
 It strikes me that allowing archive_command to be changed on the fly
 might not be such a good idea though, or at least it shouldn't be
 possible to flip it from empty to nonempty during live operation.
   

   
 I'd rather fix it the proposed way than force a restart. ISTM wrong to
 have an availability feature cause downtime.
 

 I don't think that people are very likely to need to turn archiving on
 and off on-the-fly.  Your proposed solution introduces a great deal of
 complexity (and risk of future bugs-of-omission, to say nothing of race
 conditions) to solve a non-problem.  We have better things to be doing
 with our development time.
   
So how to do a file-based backup without permanent archiving? If
pg_start_backup would turn on archiving temporarily, forcing the
archiving of all WAL files that contain open transactions, this would be
possible. This is what's requested for sites where PITR isn't needed,
just filesystem-level backup. Currently, this can be mimicked somewhat by
turning on archiving on-the-fly, hoping that all transactions are in the WAL
archive when pg_start_backup is issued (Simon's mail shows how this will
fail).

Regards,
Andreas




Re: [HACKERS] WSAStartup() in libpq

2007-03-08 Thread Andreas Pflug
Magnus Hagander wrote:

 The easy fix for this is to remove the calls. Which obviously will break
 some client apps. A fairly easy fix for the WSAStartup() call is to have
 a check in the connection functions against a global variable that will
 then make sure to call WSAStartup() the first time it's called.

 That would leave us leaking the WSAStartup() call, but only one per
 application. This is not perfect, but I'm thinking we can maybe live
 with that. 

 If not, perhaps we can have it call WSAStartup() everytime we connect to
 a server, and then WSACleanup() when we shut down that connection with
 PQfinish(). 

Going by the MSDN docs, this seems to be the recommended solution. After the
first WSAStartup call, subsequent calls are cheap because they only
increment a reference counter.

Regards,
Andreas




Re: [HACKERS] [Monotone-devel] Re: SCMS question

2007-02-23 Thread Andreas Pflug
Chris Browne wrote:
 The trouble is that there needs to be a sufficient plurality in favor
 of *a particular move onwards* in order for it to happen.

 Right now, what we see is:

 - Some that are fine with status quo
 - Some that are keen on Subversion
 - Others keen on Monotone
 - Others considering other options; Darcs, Git, Mercurial, Arch...

 There's no majority there, for sure.  No plurality, either.

 There has been a convulsion of activity surrounding SCM in the last
 couple of years, and I think that the brief trouble that the Linux
 kernel had with Bitkeeper going away has been an *excellent* thing as
 it drew developers to work on the (long languishing) SCM problem.

 It looks as though there is a strong plurality of PostgreSQL
 developers that are waiting for some alternative to become dominant.
 I suspect THAT will never happen.
   
It probably _can_ never happen, because that would have to be a
one-for-all solution, embracing both centralized and distributed
repositories, combining contradictory goals. So the first question to
answer is: Will PostgreSQL continue with a single repository (the
project was managed very successfully this way for a long time), or try
a distributed approach? IMHO the facts would vote for a central repository,
which would drastically reduce the SCM candidates.

Regards,
Andreas





Re: [HACKERS] Writing triggers in C++

2007-02-14 Thread Andreas Pflug
Tom Lane wrote:
 Jacob Rief [EMAIL PROTECTED] writes:
   
 I tried to write a trigger using C++.
 

 That is most likely not going to work anyway, because the backend
 operating environment is C not C++.  If you dumb it down enough
 --- no exceptions, no RTTI, no use of C++ library --- then it might
 work, 
I can confirm that it does work this way.

Regards,
Andreas




Re: [HACKERS] -f output file option for pg_dumpall

2007-01-09 Thread Andreas Pflug
Dave Page wrote:
 Andreas Pflug wrote:
   
 Not much functionality to re-create here; the single
 exception is extracting cluster-wide data, the -g option, which is why I
 mentioned scripting. But apparently this didn't get into pgadmin svn any
 more, so I need to retract this proposal.
 

 Eh? Your SCRIPT code is still there - or do you mean something else?
   
No GetScript implementation for pgServer.

Regards,
Andreas




Re: [HACKERS] -f output file option for pg_dumpall

2007-01-08 Thread Andreas Pflug
Jim C. Nasby wrote:


 It might make sense to provide a programmatic interface to pg_dump to
 provide tools like pgAdmin more flexibility. 
Are you talking about pg_dump in a lib? Certainly a good idea, because
it allows better integration (e.g. progress bar). 
 But it certainly doesn't make sense to re-create the dumping logic.

 In terms of integrating pg_dumpall and pg_dump; I don't really care if
 that happens, 
I can't make too much sense of integrating pg_dumpall anywhere. Dumping
a whole cluster is usually a planned job, not an interactive
online one, because its output usually won't be usable except for
disaster recovery. Not much functionality to re-create here; the single
exception is extracting cluster-wide data, the -g option, which is why I
mentioned scripting. But apparently this didn't get into pgadmin svn any
more, so I need to retract this proposal.

Regards,
Andreas




Re: [HACKERS] -f output file option for pg_dumpall

2007-01-05 Thread Andreas Pflug
Dave Page wrote:
 In pgAdmin we use pg_dump's -f option to write backup files. The IO
 streams are redirected to display status and errors etc. in the GUI.

 In order to enhance the interface to allow backup of entire clusters as
 well as role and tablespace definitions, we need to be able to get
 pg_dumpall to write its output directly to a file in the same way,
 because we cannot redirect the child pg_dump IO streams (which also
 means we may miss errors, but I need to think about that some more).

 As far as I can see, adding a -f option to pg_dumpall should be straight
 forward, the only issue being that we'd need to pass pg_dump an
 additional (undocumented?) option to tell it to append to the output
 file instead of writing it as normal.

 Any thoughts or better ideas?
   
Use pgAdmin's create script function on the server.

Regards,
Andreas



Re: [HACKERS] adminpack and pg_catalog

2006-10-20 Thread Andreas Pflug
Neil Conway wrote:
 Why does adminpack install functions into pg_catalog? This is
 inconsistent with the rest of the contrib/ packages, not to mention the
 definition of pg_catalog itself (which ought to hold builtin object
 definitions). And as AndrewSN pointed out on IRC, it also breaks
 pg_dump.
   
Having pg_dump not save the function definitions is intended
behaviour. Actually, this was different with admin80, and restoring an
8.0 backup to an 8.1 server will throw several errors now.
I'd consider installing contrib modules an act of installation, not
something that backup/restore should perform (after all, pg_restore isn't
able to do so, since it can't provide the dll/lib module).

Regards,
Andreas




Re: [HACKERS] adminpack and pg_catalog

2006-10-20 Thread Andreas Pflug
Neil Conway wrote:
 On Fri, 2006-10-20 at 05:52 +0100, Dave Page wrote:
   
 The adminpack was originally written and intended to become builtin
 functions
 

 This is not unique to adminpack: several contrib modules might
 eventually become (or have already become) builtins, but adminpack is
 the only module that defines objects in the pg_catalog schema.
   
... which appears simply pragmatic, given that it provides server
maintenance functions, not functions usually called from user applications.

   
 pg_catalog was used to ensure compatibility in the future
 

 This is again not unique to adminpack. If users install a contrib module
 into a schema that is in their search path, then if the module is
 subsequently moved to pg_catalog, no queries will need to be changed. If
 users install a module into some schema that isn't in their search path
 and use explicit schema references, they are essentially asking for
 their application to break if the object moves to a different schema.
   
Please note that adminpack is intended for administrator's use, and
should be robust to (i.e. not dependent on) search path. We previously
had this dependency in pgadmin, and found it sucks. Putting the stuff in
pg_catalog works as desired and has no negative effects (apart from the
contrib not working after pg_dump/pg_restore if not installed, which is
expected behaviour anyway).

However, adminpack was already crippled to the edge of usability for me;
I'm prepared to see it fade away further. (Since there's still no
pg_terminate_backend available, which is definitely needed, I regularly
need to install my personal adminpack.)

Regards,
Andreas




Re: [HACKERS] Backup and restore through JDBC

2006-09-29 Thread Andreas Pflug
Andrew Dunstan wrote:
 Marlon Petry wrote:


 pg_dump and pg_restore do not need to run on the server machine.
 Why not
 just run them where you want the dump stored?




 But I would need to have pg_dump and pg_restore installed on the client
 machine?
 Without having them installed, how could I make a backup?




 You can't. pg_dump in particular embodies an enormous amount of
 knowledge that simply does not exist elsewhere. There is no
 dump/restore API, and there is nothing you can hook up to using JNI,
 AFAIK.
Recently there was a proposal to extract that knowledge into a library
(making pg_dump itself just a wrapper). This sounds more and more
valuable; is anybody working on it for 8.3?

Regards,
Andreas




Re: [HACKERS] Backup and restore through JDBC

2006-09-29 Thread Andreas Pflug
Tom Lane wrote:
 Andrew Dunstan [EMAIL PROTECTED] writes:
   
 Then after you recover from your head exploding you start devising some 
 sort of sane API ...
 

 That's the hard part.  There is no percentage in having a library if
 it doesn't do anything significantly different from what you could
 accomplish via
   system(pg_dump ...switches);

 What is it you hope to accomplish by having a library, exactly?
 (And don't say more control over the dump process. 
Some more progress feedback would be really nice.
  pg_dump is already
 on the hairy edge of maintainability; we do *not* need to try to deal
 with making it still function correctly after an application programmer
 makes some random intervention in the process.)
   
Agreed. The only sane approach seems to be a single dump function call
(taking a set of parameters as prepared by command-line scanning)
and a set of callbacks that let API users do sensible stuff at
different stages of the backup process.

Regards,
Andreas







Re: [HACKERS] Release Notes: Major Changes in 8.2

2006-09-20 Thread Andreas Pflug
Simon Riggs wrote:
 Zero administration overhead now possible (Alvaro)

   With autovacuum enabled, all required vacuuming will now take place
 without administrator intervention enabling wider distribution of
 embedded databases.
   
This was true for 8.1 already, no?

Regards,
Andreas




Re: [HACKERS] Autovacuum on by default?

2006-09-02 Thread Andreas Pflug
Bruce Momjian wrote:

 Done, because most people will turn autovacuum on, even if it isn't on
 by default.
   
I wonder how many distros will turn on autovacuum as well, making it the
de-facto standard anyway.

Regards,




Re: [HACKERS] Prepared statements considered harmful

2006-08-31 Thread Andreas Pflug
Peter Eisentraut wrote:
 With time, it becomes ever clearer to me that prepared SQL statements are 
 just 
 a really bad idea.  On some days, it seems like half the performance problems 
 in PostgreSQL-using systems are because a bad plan was cached somewhere.  I'd 
 say, in the majority of cases the time you save parsing and planning is 
 irrelevant compared to the possibly disastrous effects of wrong or suboptimal 
 plans.  I wonder if other people have similar experiences.

 I'd wish that we reconsider when and how prepared statements are used.  The 
 JDBC interface and PL/pgSQL are frequently noticed perpetrators, but the 
 problem is really all over the place.

 A couple of actions to consider:

 - Never use prepared statements unless the user has turned them on.  (This is 
 the opposite of the current behavior.)

 - Transparently invalidate and regenerate prepared plans more often.  This 
 could be tied to the transaction count, update activity obtained from the 
 statistics collector, etc.

 - Redefine prepared to mean parsed rather than parsed and planned.

 Each of these or similar changes would only solve a subset of the possible 
 problems.  Possibly, we need more knobs to adjust these things.  But 
 something needs to be done.
   
Not to mention problems with outdated plans after schema changes. Leaving
views unplanned (i.e. replanning them when used in joins) could lead to
better resulting plans (e.g. if the view contains outer joins itself).
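A minimal sketch of the hazard under discussion (hypothetical table): the plan for a prepared statement is chosen once, without knowing the parameter value, so one cached plan has to serve very different selectivities:

```sql
PREPARE by_status(text) AS
    SELECT * FROM tickets WHERE status = $1;

EXECUTE by_status('escalated');  -- rare value: an index scan would win
EXECUTE by_status('closed');     -- most of the table: a seqscan would win
```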

Regards,
Andreas



Re: [HACKERS] Prepared statements considered harmful

2006-08-31 Thread Andreas Pflug
Merlin Moncure wrote:
 On 8/31/06, Peter Eisentraut [EMAIL PROTECTED] wrote:
 With time, it becomes ever clearer to me that prepared SQL statements
 are just
 a really bad idea.  On some days, it seems like half the performance
 problems
 in PostgreSQL-using systems are because a bad plan was cached
 somewhere.  I'd
 say, in the majority of cases the time you save parsing and planning is
 irrelevant compared to the possibly disastrous effects of wrong or
 suboptimal
 plans.  I wonder if other people have similar experiences.

 I have to respectfully disagree. I have used them to great effect in
 many of my projects.
Peter doesn't propose to remove prepared statements as such. They are
certainly of great value, if used carefully and specifically, as in your
case. The problems he's addressing stem from plans _implicitly_ created
and stored.
 In the most extreme case, prepared statements can
 provide a 50% reduction or greater in overall query time...this is too
 good a benefit to simply discard.  I worked on converted isam projects
 which would not have been possbile to make efficient without prepared
 statements.   However you are correct that the planner does often
 create wacky plans which can cause disasterous results in some cases.

 My major issue is that you cannot supply hints to the query engine.
I don't believe extending this thread to the we-need-hints issue is a
good idea.

Regards,
Andreas




Re: [HACKERS] autovacuum causing numerous regression-test failures

2006-08-29 Thread Andreas Pflug
Tom Lane wrote:

 My objection here is basically that this proposal passed on the
 assumption that it would be very nearly zero effort to make it happen.
 We are now finding out that we have a fair amount of work to do if we
 want autovac to not mess up the regression tests, and I think that has
 to mean that the proposal goes back on the shelf until 8.3 development
 starts.  We are already overcommitted in terms of the stuff that was
 submitted *before* feature freeze.
   

Kicking autovacuum out as the default is a disaster; it took far too long to
get into the backend already (wasn't it planned for 8.0?).
You discuss this on the basis of the regression tests, which obviously
run on installations that do _not_ represent standard recommended
installations. It has been required for ages now to have vacuum running
regularly, using cron or so. The regression tests have to deal with that
default situation, in one way or the other (which might well mean these
tables don't need vacuum or this instance doesn't need vacuum). IMHO
blaming autovacuum for the test failures reverses cause and effect.

Missing vacuum was probably a reason for the poor performance of many newbie
pgsql installations (and I must admit that I missed installing the cron
job myself from time to time, though I _knew_ it was needed). As Magnus
already pointed out, all win32 installations have it on by default, to
take them to the safe side. Disabling it for modules a retail user
will never launch appears overreacting.

I can positively acknowledge that disabling autovacuum with a
pg_autovacuum row does work; I'm using it in production.
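For reference, such a row might look roughly like this against the 8.1-era pg_autovacuum catalog (the column list varies by release, so check your version; -1 means fall back to the default):

```sql
-- disable autovacuum for a single table
INSERT INTO pg_autovacuum
       (vacrelid, enabled,
        vac_base_thresh, vac_scale_factor,
        anl_base_thresh, anl_scale_factor,
        vac_cost_delay, vac_cost_limit)
VALUES ('busy_table'::regclass, false, -1, -1, -1, -1, -1, -1);
```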

Regards,
Andreas




Re: [HACKERS] autovacuum causing numerous regression-test failures

2006-08-29 Thread Andreas Pflug
Tom Lane wrote:
 Andreas Pflug [EMAIL PROTECTED] writes:
   
 Tom Lane wrote:
 
 My objection here is basically that this proposal passed on the
 assumption that it would be very nearly zero effort to make it happen.
   

   
 Kicking out autovacuum as default is a disaster, it took far too long to
 get in the backend already (wasn't it planned for 8.0?).
 

 If it's so disastrous to not have it, why wasn't it even proposed
 until two weeks after feature freeze? 
To me, this proposal was just too obvious, for reasons already discussed
earlier.

Regards,
Andreas




Re: [HACKERS] autovacuum causing numerous regression-test failures

2006-08-29 Thread Andreas Pflug
Peter Eisentraut wrote:
 On Tuesday, 29 August 2006 11:14, Andreas Pflug wrote:
   
 already pointed out, all win32 installations have it on by default, to
 take them to the safe side. Disabling it for modules a retail user
 will never launch appears overreacting.
 

 Well, the really big problem is that autovacuum may be connected to a 
 database 
 when you want to drop it.  (There may be related problems like vacuuming a 
 template database at the wrong time.  I'm not sure how that is handled.)  I 
 think this is not only a problem that is specific to the regression testing 
 but a potential problem in deployment.  I have opined earlier how I think 
 that should behave properly, but we're not going to change that in 8.2.
   
Don't these issues hit a cron scheduled vacuum as well?

Regards,
Andreas




Re: [HACKERS] integration of pgcluster into postgresql

2006-08-27 Thread Andreas Pflug
Tom Lane wrote:

 My take on all this is that there's no one-size-fits-all replication
 solution, and therefore the right approach is to have multiple active
 subprojects. 
Anybody who knows a little about the world of replication needs will
agree with you here. Unfortunately, AFAICS pgcluster can't be added as a
module like Slony-I, since it's rather a not-so-small patch to the
pgsql sources. So I wonder if it's possible to provide some
not-too-intrusive hooks in core pgsql, enabling pgcluster to do most of
its work in modules, to get the best of both worlds: a core with as few
modifications as possible, and modules extending the operation,
profiting from backend development immediately.

Regards,
Andreas





Re: [HACKERS] pg_dump versus SERIAL, round N

2006-08-20 Thread Andreas Pflug
Tom Lane wrote:

 Almost everything I just said is already how it works today; the
 difference is that today you do not have the option to drop t1 without
 dropping the sequence, because there's no (non-hack) way to remove the
 dependency.
   
As far as I understand your proposal I like it, but I'd like to ensure
that the situation where a sequence is used by multiple tables is
handled correctly. There _are_ databases that reuse a sequence for
multiple serial-like columns, and pgadmin supports this (including a
pg_depend insert, which would need a version-dependent fix).

Regards,
Andreas




Re: [HACKERS] pg_dump versus SERIAL, round N

2006-08-20 Thread Andreas Pflug
Tom Lane wrote:
 If you insist on initially creating the sequence by saying SERIAL for
 the first of the tables, and then saying DEFAULT nextval('foo_seq')
 for the rest, then under both 8.1 and my proposal you'd not be able to
 drop the first table without dropping the sequence (thus requiring you
 to say CASCADE so that the other tables' defaults can be dropped).
 The difference is that I'm proposing a way to decouple the sequence from
 its original owning column and make it into a true freestanding object,
 after which you could drop the first table without losing the sequence and
 the other defaults.
   
For decoupling, you'd require ALTER SEQUENCE ... OWNER BY NONE to be
executed, right?
I basically doubt the concept of a single owner. I'd expect a sequence
to be dropped by a cascaded table drop if that was its last usage
and dependencies existed. This would probably mean multiple owners.

 Basically the proposed command allows you to convert from the case where
 a sequence was created by SERIAL to the case where it was created
 free-standing, or vice versa.

 The other change is that using an AUTO instead of INTERNAL dependency
 makes it legal to drop the sequence without dropping the column.
   
Sounds fine.

 AFAICS this doesn't disallow anything you could do before, and it
 allows fixing the problems pg_dump is having.  Is there something
 you need it to do that it doesn't do?
   
Sequence cleanup with multiple tables (multiple owners).

Regards,
Andreas




Re: [HACKERS] pg_terminate_backend

2006-08-18 Thread Andreas Pflug
Magnus Hagander wrote:
 Since I have a stuck backend without client again, I'll have to kill
 -SIGTERM a backend. Fortunately, I do have console access to that
 machine and it's not win32 but a decent OS.

 You do know that on Windows you can use pg_ctl to send a pseudo
 SIGTERM to a backend, don't you?

 The main issue still is that console access is required, on any OS.

 Yeah.
 Though for the Windows case only, we could easily enough make it
 possible to run pg_ctl kill remotely, since we use a named pipe. Does
 this seem like a good or bad idea?
   

Not too helpful. How to kill a win32 backend from a linux workstation?
Additionally, NP requires an authenticated RPC connection. If you're not
allowed to access the console, you probably haven't got sufficient
access permissions to NP as well, or you'd need extra policy tweaking or
so. Nightmarish, just to avoid the easy and intuitive way.
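For reference, the pg_ctl facility mentioned above is invoked like this (the pid is hypothetical):

```
pg_ctl kill TERM 1234
```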

Regards,
Andreas



Re: [HACKERS] [PATCHES] [Patch] - Fix for bug #2558, InitDB failed to run

2006-08-15 Thread Andreas Pflug
Tom Lane wrote:
 Andrew Dunstan [EMAIL PROTECTED] writes:
   
 I am more than somewhat perplexed as to why the NUL device should be a
 security risk ... what are they thinking??
 

 Frankly, I don't believe it; even Microsoft can't be that stupid.
 And I can't find any suggestion that they've done this in a google
 search.  I think the OP is misdiagnosing his problem.
   
An older message suggests that a service pack induced this problem, per
MS. I just tried it as non-admin on a W2K3 machine with recent hotfixes,
and the command dir > nul _did_ work for me.
Though neglected, it still sounds like a virus scanner issue to me.

Regards,
Andreas




Re: [HACKERS] [PATCHES] [Patch] - Fix for bug #2558, InitDB failed to run

2006-08-15 Thread Andreas Pflug
Bruce Momjian wrote:
 Andreas Pflug wrote:
 Tom Lane wrote:
 Andrew Dunstan [EMAIL PROTECTED] writes:
   
 I am more than somewhat perplexed as to why the NUL device should be a
 security risk ... what are they thinking??
 
 Frankly, I don't believe it; even Microsoft can't be that stupid.
 And I can't find any suggestion that they've done this in a google
 search.  I think the OP is misdiagnosing his problem.
   
 An older message suggests that a service pack induced this problem, per
 MS. I just tried it as non-admin on a W2K3 machine with recent hotfixes,
 and the command dir nul _did_ work for me.
 Though neglected, it still sounds like a virus scanner issue to me.
 
 Yes, it seems we will need more information on this.  We need someone at
 a win32 command prompt to show us a > nul failure.

OTOH,
what issues might arise if the output is redirected to a legal tmp file?

Regards,
Andreas



Re: [HACKERS] [PATCHES] [Patch] - Fix for bug #2558, InitDB failed to run

2006-08-15 Thread Andreas Pflug
Tom Lane wrote:
 Andreas Pflug [EMAIL PROTECTED] writes:
   
 what issues might arise if the output is redirected to a legal tmp file?
 

 Well, (1) finding a place to put the temp file, ie a writable directory;
 (2) ensuring the file is removed afterwards; (3) not exposing the user
 to security hazards due to unsafe use of a temp file (ye olde
 overwrite-a-symlink risk).  Perhaps a few more I didn't think of.
   

AFAICS all DEVNULL usages result from redirecting postmaster's output,
which usually goes to $DATADIR/serverlog at runtime. If this were
used here too, (1) is as safe as any $DATADIR, (2) is as safe as
cleaning up after failure usually is, (3) can't happen because the
directory is checked to be empty before initdb anyway. Additionally,
there might be cases where a meaningful logfile from initdb is
desirable too. So why not redirect to an initlog or so?

Regards,
Andreas





[HACKERS] remote query debugging was: Plugins redux

2006-08-09 Thread Andreas Pflug
Tom Lane wrote:
 The other, probably more controversial bit of functionality is that there
 needs to be a way to cause a backend to load a PL plugin shared library
 without any cooperation from the connected client application.  For
 interactive use of a PL debugger it might be sufficient to tell people to
 do LOAD 'plpgsql_debugger' before running their function-to-be-tested,
 but if you're trying to debug some behavior that only manifests in a large
 complicated application, it may be impractical to get the application to
 issue such a command.  
A similar issue applies to plain SQL that's not touching any PL: In the
past, I encountered numerous situations where I'd have liked to take a
peek at the current application's queries (on MSSQL, this can be done
with SQL Profiler), but not have constant statement logging. IMHO it
would be a good idea if

- debugging could be turned on and off on-the-fly, e.g. to skip the app
startup phase. Thus I question the statement "a GUC variable is sufficient".
- the mechanism would cover plain SQL too.

Regards,
Andreas




Re: [HACKERS] remote query debugging was: Plugins redux

2006-08-09 Thread Andreas Pflug
Tom Lane wrote:

 I'd turn that around: I think you are arguing for a way to change GUC
 settings on-the-fly for a single existing session, without cooperation
 from the client.  

OK, implemented that way it would solve it (partially).
Something like pg_set_guc(pid int4, varname text, value text) would be
fine to set GUC on-the-fly. Could probably be signaled to the target
backend with SIGHUP, but how should the individual parameter be
transmitted, and eventually be retrieved? What about multiple parameters
to be set atomically?

A different approach: a system table pg_guc, that holds current GUC
settings for each backend.
- on SIGHUP, the backend reloads postgresql.conf as usual and writes its
GUC settings into pg_guc, unless a config file override flag is set.
- if pg_guc.config_override is set, GUCs are read from the table instead,
and the flag is reset.
- truncate pg_guc on postmaster start/restart

Regards,
Andreas

PS the non-solved part for me is still that log_statement logging would
still go to the standard log, in a less machine-readable way, mixed with
other backend's data and possibly truncated. But that's a different story.





Re: [HACKERS] 8.2 features status

2006-08-04 Thread Andreas Pflug
Bruce Momjian wrote:

 Right, hence usability, not new enterprise features.
   
I'm not too happy about the label usability.

"Ok, maybe postgres finally gets usable by supporting features that
MySQL has had for a long time", a MySQL guy would say.

Regards,
Andreas




Re: [HACKERS] pg_terminate_backend

2006-08-03 Thread Andreas Pflug
Andrew Dunstan wrote:


 Andreas Pflug wrote:

 Since I have a stuck backend without client again, I'll have to kill
 -SIGTERM a backend. Fortunately, I do have console access to that
 machine and it's not win32 but a decent OS.
  


 You do know that on Windows you can use pg_ctl to send a pseudo
 SIGTERM to a backend, don't you?
The main issue still is that console access is required, on any OS.

Regards,
Andreas




Re: [HACKERS] pg_terminate_backend

2006-08-03 Thread Andreas Pflug
Tom Lane wrote:
 Andreas Pflug [EMAIL PROTECTED] writes:
   
 utils/adt/misc.c says:
 /* Disabled in 8.0 due to reliability concerns; FIXME someday */
 Datum
 pg_terminate_backend(PG_FUNCTION_ARGS)
 

   
 Well, AFAIR there were no more issues raised about code paths that don't 
 clean up correctly, so can we please
 remove that comment and make the function live finally? 
 

 No, you have that backwards.  The burden of proof is on those who want
 it to show that it's now safe.  The situation is not different than it
 was before, except that we can now actually point to a specific bug that
 did exist, whereas the original concern was just an unfocused one that
 the code path hadn't been adequately exercised.  That concern is now
 even more pressing than it was.
   

If the backend's stuck, I'll have to SIGTERM it, whether there's
pg_terminate_backend or not. Ultimately, if resources should remain
locked, there's no chance except restarting the whole server anyway.
SIGTERM gives me a fair chance (90%) that it will work without restart.

The persistent refusal of supporting the function makes it more painful
to execute, but not less necessary.

Regards,
Andreas




Re: [HACKERS] pg_terminate_backend

2006-08-03 Thread Andreas Pflug
Tom Lane wrote:
 Andreas Pflug [EMAIL PROTECTED] writes:
   
 Tom Lane wrote:
 
 No, you have that backwards.  The burden of proof is on those who want
 it to show that it's now safe.
   

   
 If the backend's stuck, I'll have to SIGTERM it, whether there's
 pg_terminate_backend or not.
 

 Stuck?  You have not shown us a case where SIGTERM rather than SIGINT
 is necessary or appropriate. 
Last night, I had a long-running query I launched from pgAdmin. It was
happily running and completing on the server (took about 2 hours), and
the backend went back to IDLE. pgAdmin didn't get back a response,
assuming the query was still running. Apparently, the VPN router had
interrupted the connection silently without notifying either side of the
tcp connection. Since the backend is IDLE, there's no query to cancel
and SIGINT won't help. So "stuck" for me means a backend *not*
responding to SIGINT.
BTW, there's another scenario where SIGINT won't help. Imagine an app
running wild hammering the server with queries regardless of query
cancels (maybe some retry mechanism). You'd like to interrupt that
connection, i.e. get rid of the backend.

Regards,
Andreas




Re: [HACKERS] pg_terminate_backend

2006-08-03 Thread Andreas Pflug
Csaba Nagy wrote:
 On Thu, 2006-08-03 at 18:10, Csaba Nagy wrote:
   
 You didn't answer the original question: is killing SIGTERM a backend
 
   ^^^
 Nevermind, I don't do that. I do 'kill backend_pid' without specifying
 the signal, and I'm sufficiently unfamiliar with the unix signal names
 to have confused them. Is a plain kill still dangerous ?
   
SIGTERM is the default kill parameter, so you do exactly what I'm
talking about.

Regards,
Andreas




Re: [HACKERS] pg_terminate_backend

2006-08-03 Thread Andreas Pflug
Bruce Momjian wrote:


 I am not sure how you prove the non-existence of a bug.  Ideas?
   
Would be worth at least the Nobel prize :-)

Regards,
Andreas





Re: [HACKERS] pg_terminate_backend

2006-08-03 Thread Andreas Pflug
Csaba Nagy wrote:
 man kill says the default is SIGTERM.
 

 OK, so that means I do use it... is it known to be dangerous ? I thought
 till now that it is safe to use. 
Apparently you never suffered any problems from that; neither did I.

 What about select pg_cancel_backend()
   

That's the function wrapper around kill -SIGINT, which is probably the
way you could safely stop your queries most of the time.
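As a usage sketch (the pid is hypothetical), cancelling a runaway query from another session looks roughly like this:

```sql
-- List active backends, then cancel one; pg_cancel_backend() sends
-- SIGINT, which interrupts the current query but keeps the backend alive.
SELECT procpid, current_query FROM pg_stat_activity;
SELECT pg_cancel_backend(12345);  -- pid taken from the list above
```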


Regards,
Andreas




[HACKERS] pg_terminate_backend

2006-08-02 Thread Andreas Pflug
Since I have a stuck backend without client again, I'll have to kill -SIGTERM a 
backend. Fortunately, I do 
have console access to that machine and it's not win32 but a decent OS. For 
other cases I'd really really really 
appreciate it if that function would make it into 8.2.

utils/adt/misc.c says:

#ifdef NOT_USED

/* Disabled in 8.0 due to reliability concerns; FIXME someday */
Datum
pg_terminate_backend(PG_FUNCTION_ARGS)

Well, AFAIR there were no more issues raised about code paths that don't clean 
up correctly, so can we please
remove that comment and make the function live finally? 

Regards,
Andreas





Re: [HACKERS] GUC with units, details

2006-07-26 Thread Andreas Pflug

Peter Eisentraut wrote:

Bort, Paul wrote:
  

The Linux kernel changed to the standard years ago. And that's just a
few more lines of code than PostgreSQL. (
http://kerneltrap.org/node/340 and others )



For your entertainment, here are the usage numbers from the linux-2.6.17 
kernel:


kilobyte (-i)   82
kibibyte (-i)   2
megabyte (-i)   98
mebibyte (-i)   0
gigabyte (-i)   32
gibibyte (-i)   0

KB  1151
kB  407
KiB 181
MB  3830
MiB 298
GB  815
GiB 17

So I remain unconvinced.

Of course, your general point is a good one.  If there are actually 
systems using this, it might be worth considering.  But if not, then 
we're just going to confuse people.
  
Is it worth bothering about the small deviation, if 1 was meant, but 
10k gives 10240 buffers? Isn't it quite common that systems round config 
values to the next sensible value anyway?
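To make the ambiguity concrete, here is a small sketch (the function is invented) of how a 10 MB setting maps to 8 kB buffers under binary vs. decimal unit multipliers:

```python
# Hypothetical sketch (the function is invented): how a "10 MB" memory
# setting maps to a count of 8 kB buffers under binary (1024-based)
# vs. decimal (1000-based) unit multipliers, the ambiguity debated above.
def to_buffers(value, unit, binary=True, buffer_size=8192):
    """Convert a value with a unit suffix into buffer_size-byte buffers."""
    base = 1024 if binary else 1000
    scale = {"kB": base, "MB": base ** 2, "GB": base ** 3}[unit]
    return (value * scale) // buffer_size

print(to_buffers(10, "MB", binary=True))   # 1280 (10 * 1024^2 / 8192)
print(to_buffers(10, "MB", binary=False))  # 1220 (10 * 1000^2 / 8192, rounded down)
```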


Regards,
Andreas





Re: [HACKERS] Progress bar updates

2006-07-19 Thread Andreas Pflug

Josh Berkus wrote:

Andreas,

  

Some weeks ago I proposed a PROGRESS parameter for COPY, to enable
progress feedback via notices. tgl thinks nobody needs that...



Well, *Tom* doesn't need it.  What mechanism did you propose to make this 
work?
  
Extended the parser to accept that keyword, and emit notices when n 
lines were copied. I found that convenient when transferring a large 
amount of data, to estimate total runtime.
Patch was submitted a while ago to -hackers, together with compression,
which was torn down in a way not suited to inspire me to continue.


Regards,
Andreas





Re: [HACKERS] password is no required, authentication is overridden

2006-07-19 Thread Andreas Pflug

Andrew Dunstan wrote:




It strikes me that this is actually a bad thing for pgadmin3 to be 
doing. It should use its own file, not the default location, at least 
if the libpq version is >= 8.1. We provided the PGPASSFILE environment 
setting just so programs like this could use alternative locations for 
the pgpass file. Otherwise, it seems to me we are violating the POLS, 
as in the case of this user who not unnaturally thought he had found a 
major security hole.

.pgpass is THE mechanism for storing libpq passwords, so what is wrong?
If the account is assumed insecure, the user shouldn't check "store 
password" in pgadmin3.

That's a libpq issue, not a pgadmin3 issue.

Regards,
Andreas




Re: [HACKERS] Progress bar updates

2006-07-18 Thread Andreas Pflug

Gregory Stark wrote:

Has anyone looked thought about what it would take to get progress bars from
clients like pgadmin? (Or dare I even suggest psql:)
  


Some weeks ago I proposed a PROGRESS parameter for COPY, to enable 
progress feedback via notices. tgl thinks nobody needs that...


Regards,
Andreas




Re: [HACKERS] More on inheritance and foreign keys

2006-06-08 Thread Andreas Pflug

Tom Lane wrote:

Zeugswetter Andreas DCP SD [EMAIL PROTECTED] writes:

The solution to the foreign key problem seems easy if I 
modify PostgreSQL implementation and take off the ONLY word 
from the SELECT query, but it's not an option for me, as I'm 




I think that the ONLY was wrong from day one :-(



Well, sure, but until we have an implementation that actually *works*
across multiple tables, it has to be there so that we can at least
consistently support the current single-table semantics.  Until we
have some form of cross-table unique constraint (index or whatever)


I managed uniqueness using normal indexes and ins/upd triggers on all 
child tables:


CREATE OR REPLACE FUNCTION checkchildsunique()
  RETURNS trigger AS
$BODY$
BEGIN
    -- Reject the new row if its key already exists in the parent table.
    IF EXISTS (
        SELECT 1 FROM foo Master
         WHERE Master.primaryKeyCol = NEW.primaryKeyCol)
    THEN
        RAISE EXCEPTION 'Primary Key violation in table % on %',
            TG_RELNAME, TG_OP;
    END IF;
    RETURN NEW;
END;
$BODY$ LANGUAGE plpgsql;

Shouldn't be too complicated to implement it as internal function.
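To complete the sketch, the function would be attached to each child table roughly like this (the table name is invented):

```sql
-- Fire the cross-table uniqueness check before inserts/updates on a child.
CREATE TRIGGER foo_child_unique
    BEFORE INSERT OR UPDATE ON foo_child
    FOR EACH ROW EXECUTE PROCEDURE checkchildsunique();
```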

Regards,
Andreas



Re: [HACKERS] Possible TODO item: copy to/from pipe

2006-06-04 Thread Andreas Pflug

Bruce Momjian wrote:


For use case, consider this:

COPY mytable TO '| rsh [EMAIL PROTECTED]  test ';

so you can COPY to another server directly.
  

Why not rsh psql -c \copy foobar to test ?

Regards,
Andreas




[HACKERS] copy with compression progress n

2006-05-31 Thread Andreas Pflug
I've been playing around with COPYing large binary data, and implemented 
a COMPRESSION transfer format. The server side compression saves 
significant bandwidth, which may be the major limiting factor when large 
amounts of data are involved (i.e. in many cases where COPY TO/FROM 
STDIN/STDOUT is used).
In addition, a progress notification can be enabled using a PROGRESS 
each n lines option.


I tested this with a table containing 2000 rows with a highly 
compressible bytea column (size 1.4GB, on-disk 138MB). Numbers are as 
follows (8.2 HEAD psql):

pg_dump -a -F c -t  652s, 146MB
\copy TO /dev/null  322s
\copy TO /dev/null binary   24s
\copy TO /dev/null compression  108s
\copy TO /tmp/file binary   55s, 1.4GB
\copy TO /tmp/file compression  108s, 133MB
\copy TO STDOUT binary|gzip -1  69s, 117MB

So using the plain text copy has a large overhead for text data over 
binary formats. OTOH, copying normal rows WITH BINARY may bloat the 
result too. A typical test table gave these numbers:

COPY:   6014 Bytes
BINARY: 15071 Bytes
COMPRESSION:2334 Bytes

The compression (pg_lzcompress) is less efficient than a binary copy 
piped to gzip, as long as the data transfer of 1.4GB from server to 
client isn't limited by network bandwidth. Apparently, pg_lzcompress 
uses 53s to compress to 133MB, while gzip only needs 14s for 117MB. 
Might be worth to have a look optimizing that since it's used in 
tuptoaster. Still, when network traffic is involved, it may be better to 
have some time spent on the server to reduce data (e.g. for Slony, which 
uses COPY to start a replication, and is likely to be operated over 
lines below 1 GBit/s).
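The level-1-vs-maximum tradeoff that gzip exposes can be sketched with zlib (the input data is invented, standing in for the compressible bytea column):

```python
import zlib

# Illustrative sketch of the tradeoff measured above: the cheapest
# zlib/gzip compression level vs. the strongest one, on highly
# compressible input (invented here, standing in for the bytea column).
data = b"ABCD" * 250_000  # ~1 MB of highly repetitive input

fast = zlib.compress(data, 1)  # level 1: least CPU, still a large reduction
best = zlib.compress(data, 9)  # level 9: more CPU for (typically) less output

print(len(data), len(fast), len(best))
```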


The attached patch implements COPY ... WITH [BINARY] COMPRESSION 
(compression implies BINARY). The copy data uses bit 17 of the flag 
field to identify compressed data.
The PROGRESS n option to throw notices each n lines has a caveat: when 
copying TO STDOUT, data transfer will cease after the first notice was 
sent. This may either mean "don't ereport(NOTICE) when COPYing data to 
the client" or a bug somewhere.
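For clarity, the flag encoding can be sketched as follows (the constant names are invented; treating the LSB as bit 0, matching the existing OID flag at bit 16, is an assumption):

```python
# Sketch of the proposed flag encoding (constant names invented; bit
# numbering assumes the LSB is bit 0, matching the existing COPY-binary
# header convention where bit 16 marks the presence of OIDs).
HAS_OIDS   = 1 << 16   # existing flag in the COPY binary header
COMPRESSED = 1 << 17   # bit 17, claimed by the patch for compressed data

def is_compressed(flags):
    """Return True if the header flag field marks compressed data."""
    return bool(flags & COMPRESSED)

print(is_compressed(COMPRESSED | HAS_OIDS))  # True
print(is_compressed(HAS_OIDS))               # False
```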


Regards,
Andreas
Index: src/backend/commands/copy.c
===
RCS file: /projects/cvsroot/pgsql/src/backend/commands/copy.c,v
retrieving revision 1.266
diff -c -r1.266 copy.c
*** src/backend/commands/copy.c 26 May 2006 22:50:02 -  1.266
--- src/backend/commands/copy.c 31 May 2006 08:52:42 -
***
*** 47,53 
  #include "utils/memutils.h"
  #include "utils/relcache.h"
  #include "utils/syscache.h"
! 
  
  #define ISOCTAL(c) (((c) >= '0') && ((c) <= '7'))
  #define OCTVALUE(c) ((c) - '0')
--- 47,53 
  #include "utils/memutils.h"
  #include "utils/relcache.h"
  #include "utils/syscache.h"
! #include "utils/pg_lzcompress.h"
  
  #define ISOCTAL(c) (((c) >= '0') && ((c) <= '7'))
  #define OCTVALUE(c) ((c) - '0')
***
*** 103,114 
--- 103,121 
int client_encoding;/* remote side's 
character encoding */
boolneed_transcoding;   /* client encoding diff 
from server? */
boolencoding_embeds_ascii;  /* ASCII can be non-first byte? 
*/
+ booldo_compress;/* compress data before writing to output */
+ booldo_flush;   /* flush fe_msgbuf to copy target file/pipe */
+ booluse_raw_buf;/* use raw buffered data for CopyGetData */
uint64  processed;  /* # of tuples processed */
+   uint64  progress;   /* progress notice each # 
tuples processed */
+ 
+   MemoryContext oldcontext;
  
/* parameters from the COPY command */
Relationrel;/* relation to copy to or from 
*/
List   *attnumlist; /* integer list of attnums to copy */
boolbinary; /* binary format? */
+   boolcompression;/* binary compressed format? */
booloids;   /* include OIDs? */
boolcsv_mode;   /* Comma Separated Value 
format? */
boolheader_line;/* CSV header line? */
***
*** 153,162 
 * converts it.  Note: we guarantee that there is a \0 at
 * raw_buf[raw_buf_len].
 */
! #define RAW_BUF_SIZE 65536/* we palloc RAW_BUF_SIZE+1 bytes */
char   *raw_buf;
int raw_buf_index;  /* next byte to process */
int raw_buf_len;/* total # of bytes stored */
  } CopyStateData;
  
  typedef CopyStateData *CopyState;
--- 160,170 
 * converts it.  Note: we guarantee that there is a \0 at
 * raw_buf[raw_buf_len].
 */
! #define RAW_BUF_SIZE 65536/* initially, we palloc RAW_BUF_SIZE+1 
bytes */
char   *raw_buf;
int raw_buf_index;  

Re: [HACKERS] copy with compression progress n

2006-05-31 Thread Andreas Pflug

Tom Lane wrote:

Andreas Pflug [EMAIL PROTECTED] writes:

The attached patch implements COPY ... WITH [BINARY] COMPRESSION 
(compression implies BINARY). The copy data uses bit 17 of the flag 
field to identify compressed data.



I think this is a pretty horrid idea, because it changes pg_lzcompress
from an unimportant implementation detail into a backup file format
that we have to support till the end of time.  What happens if, say,
we need to abandon pg_lzcompress because we find out it has patent
problems?

It *might* be tolerable if we used gzip instead,


I used pg_lzcompress because it's present in the backend. I'm fine with 
every other good compression algorithm.



 but I really don't see
the argument for doing this inside the server at all: piping to gzip
seems like a perfectly acceptable solution,


As I said, this hits only if it is possible to pipe the result into gzip 
in a performant way. The issue already arises if psql or any other COPY 
client (slony, pg_dump) is not on the same machine: Network bandwidth 
will limit throughput.



quite possibly with higher
performance than doing it all in a single process (which isn't going
to be able to use more than one CPU).


Which is pretty normal for pgsql.


I don't see the argument for restricting it to binary only, either.


That's not a restriction, but a result: compressed data is binary. 
Marking it as binary will make it work with older frontends as well, 
as long as they don't try to interpret the data. Actually, all 8.x psql 
versions should work (with COPY STDxx, not \copy).


Do you have a comment about the progress notification and its impact on 
copy to stdout?


Regards,
Andreas



Re: [HACKERS] Possible TODO item: copy to/from pipe

2006-05-31 Thread Andreas Pflug

Tom Lane wrote:

After re-reading what I just wrote to Andreas about how compression of
COPY data would be better done outside the backend than inside, it
struck me that we are missing a feature that's fairly common in Unix
programs.  Perhaps COPY ought to have the ability to pipe its output
to a shell command, or read input from a shell command.  Maybe something
like

 COPY mytable TO '| gzip >/home/tgl/mytable.dump.gz';





(I'm not wedded to the above syntax, it's just an off-the-cuff thought.)

Of course psql would need the same capability, since the server-side
copy would still be restricted to superusers.


Won't help too much, until gzip's output is piped back too, so a 
replacement for COPY .. TO STDOUT COMPRESSED  would be
COPY ... TO '| /bin/gzip |' STDOUT, to enable clients to receive the 
reduced stuff. But clients should be agnostic of server side installed 
tools, and probably not be able to address them directly. Sounds like a 
potential security issue.
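The purely client-side equivalent, which keeps the server agnostic of external tools, is simply piping psql's COPY output (database and file names are invented):

```
psql -c "COPY mytable TO STDOUT" mydb | gzip > mytable.dump.gz
```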


Regards,
Andreas



Re: [HACKERS] Possible TODO item: copy to/from pipe

2006-05-31 Thread Andreas Pflug

Andreas Pflug wrote:



Won't help too much, until gzip's output is piped back too, so a 
replacement for COPY .. TO STDOUT COMPRESSED  would be
COPY ... TO '| /bin/gzip |' STDOUT, to enable clients to receive the 
reduced stuff.


Forgot to mention:
COPY COMPRESSED was also meant to introduce a portable format that's 
efficient for both text and binary data. Relying on some external XYZzip 
version seems not too portable to me.


Regards,
Andreas



Re: [HACKERS] Possible TODO item: copy to/from pipe

2006-05-31 Thread Andreas Pflug

Dave Page wrote:
 




-Original Message-
From: [EMAIL PROTECTED] 
[mailto:[EMAIL PROTECTED] On Behalf Of Andreas Pflug

Sent: 31 May 2006 16:41
Cc: Tom Lane; pgsql-hackers@postgresql.org
Subject: Re: [HACKERS] Possible TODO item: copy to/from pipe

Andreas Pflug wrote:

 Won't help too much, until gzip's output is piped back too, so a
 replacement for COPY .. TO STDOUT COMPRESSED would be
 COPY ... TO '| /bin/gzip |' STDOUT, to enable clients to receive the
 reduced stuff.

 Forgot to mention:
 COPY COMPRESSED was also meant to introduce a portable format that's
 efficient for both text and binary data. Relying on some external
 XYZzip version seems not too portable to me.



It does have that advantage. Gzip and others are not particularly
Windows friendly for example.


... as most windows programs are pipe agnostic.

Regards,
Andreas



Re: [HACKERS] copy progress notification

2006-05-31 Thread Andreas Pflug

Tom Lane wrote:

Andreas Pflug [EMAIL PROTECTED] writes:

Do you have a comment about the progress notification and its impact on 
copy to stdout?



I didn't bother to comment on it because I think it's useless,


It's useful to see anything at all, and to be able to estimate how long 
the whole process will take. People might find it interesting whether 
they should go for a cup of coffee or better come back the next day...


as well as broken for the stdout case. 


I know it's broken, but why? Is using ereport when sending copy data 
illegal by design? If not, it's not the feature that's broken but 
something in cvs HEAD.


Regards,
Andreas



Re: [HACKERS] Possible TODO item: copy to/from pipe

2006-05-31 Thread Andreas Pflug

Joshua D. Drake wrote:


I dislike putting this into the backend precisely because it's trying to
impose a one-size-fits-all compression solution.  Someone might wish to
use bzip2 instead of gzip, for instance, or tweak the compression level
options of gzip.  It's trivial for the user to do that if the
compression program is separate, not trivial at all if it's wired into
COPY.  Also, a pipe feature would have uses unrelated to compression,
such as on-the-fly analysis or generation of data.



It seems that it would be better to have the options within pg_dump 
which would give the most flexibility.


What about all other client tools?

My COPY WITH COMPRESSION is not the same as taking a copy file and 
zipping it; it creates a copy file with BinarySignature that has 
compressed bytes in the data part, thus it can be handled by any client 
app that can stream binary copy files from/to the server.


Regards,
Andreas



Re: [HACKERS] Possible TODO item: copy to/from pipe

2006-05-31 Thread Andreas Pflug

Chris Browne wrote:

[EMAIL PROTECTED] (Andreas Pflug) writes:


Dave Page wrote:


-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Andreas
Pflug
Sent: 31 May 2006 16:41
Cc: Tom Lane; pgsql-hackers@postgresql.org
Subject: Re: [HACKERS] Possible TODO item: copy to/from pipe

 Andreas Pflug wrote:
 Won't help too much, until gzip's output is piped back too, so a
 replacement for COPY .. TO STDOUT COMPRESSED would be
 COPY ... TO '| /bin/gzip |' STDOUT, to enable clients to receive the
 reduced stuff.

 Forgot to mention:
 COPY COMPRESSED was also meant to introduce a portable format
 that's efficient for both text and binary data. Relying on some
 external XYZzip version seems not too portable to me.

 It does have that advantage. Gzip and others are not particularly
 Windows friendly for example.

 ... as most windows programs are pipe agnostic.



Shall we make PostgreSQL less powerful because of that?


I never said that. We should seek solutions that run painlessly on most 
popular platforms and are useful to users.
I wonder if we'd be able to ship gzip with the windows installer, to 
ensure proper integration.


Regards,
Andreas



Re: [HACKERS] Possible TODO item: copy to/from pipe

2006-05-31 Thread Andreas Pflug

Dave Page wrote:


 It's not about a primarily GUI based OS not being able to do
 everything a traditionally command line based OS can do on the
 command line, it's about providing a solution that will work on
 either and remain portable. Whilst I agree with your objection to
 using pg_lzcompress,


Well, pg_lzcompress has been in the backend for more than 6 years now; 
strange that the objections arise only now. However, a replacement for it 
might be a good idea, since apparently the fastest gzip algorithm is 3x 
faster for 10% better compression. TOAST write performance would probably 
profit significantly from a better algorithm.


I wonder what other use-cases exist for server side copy filters beyond 
compression.


Regards,
Andreas




Re: [HACKERS] XLogArchivingActive

2006-05-27 Thread Andreas Pflug

Jim C. Nasby wrote:



Also, regarding needing to place an archiver command in
pg_start_backup_online, another option would be to depend on the
filesystem backup to copy the WAL files, and just let them pile up in
pg_xlog until pg_stop_backup_online. Of course, that would require a
two-step filesystem copy, since you'd need to first copy everything in
$PGDATA, and then copy $PGDATA/pg_xlog after you have that.


Sounds fine. This solves the problem of ensuring that all required WAL
files are actually copied to the WAL archive.
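A sketch of that two-step copy, assuming the pg_start_backup()/pg_stop_backup() functions discussed in this thread; paths are placeholders and this is an outline of the procedure, not a tested backup script:

```shell
#!/bin/sh
PGDATA=/var/lib/postgresql/data
BACKUP=/mnt/backup/base

psql -c "SELECT pg_start_backup('filesystem-copy');"

# Step 1: copy everything except pg_xlog (its contents keep changing).
rsync -a --exclude=pg_xlog "$PGDATA/" "$BACKUP/"

# Step 2: copy pg_xlog last, so every WAL segment written during
# step 1 is included and the backup can reach a consistent state.
rsync -a "$PGDATA/pg_xlog/" "$BACKUP/pg_xlog/"

psql -c "SELECT pg_stop_backup();"
```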


Regards,
Andreas



Re: [HACKERS] Inefficient bytea escaping?

2006-05-27 Thread Andreas Pflug

Tom Lane wrote:

I wrote:


I'm off for a little visit with oprofile...



It seems the answer is that fwrite() does have pretty significant
per-call overhead, at least on Fedora Core 4.  The patch I did yesterday
still ended up making an fwrite() call every few characters when dealing
with bytea text output, because it'd effectively do two fwrite()s per
occurrence of '\' in the data being output.  I've committed a further
hack that buffers a whole data row before calling fwrite().  Even though
this presumably is adding one extra level of data copying, it seems to
make things noticeably faster:


(semi-OT) This recoding seems like a perfect preparation for a third 
COPY format, compressed.




Let me know what this does on your Debian machine ...


Takes a while; I need a different kernel booted because the current one
isn't oprofile-ready.


Regards,
Andreas



Re: [HACKERS] XLogArchivingActive

2006-05-26 Thread Andreas Pflug

Tom Lane wrote:

Andreas Pflug [EMAIL PROTECTED] writes:

That's right, but my proposal would implicitly switch on archiving
while backup is in progress, thus explicitly enabling/disabling
archiving wouldn't be necessary.



I'm not sure you can expect that to work.  The system is not built to
guarantee instantaneous response to mode changes like that.


Um, as long as xlog writing stops recycling segments immediately when
pg_start_backup is executed, everything should be fine, since archived
logs are not expected to be present until pg_stop_backup is done.




The conventional wisdom is that pg_dump files are substantially smaller
than the on-disk footprint ... and that's even without compressing them.
I think you are taking a corner case, ie bytea data, and presenting it
as something that ought to be the design center.


I certainly have an extreme corner case, since the data is highly
compressible. I won't suggest replacing pg_dump with physical backup
methods, but disaster recovery may take considerably longer from a dump
than from a filesystem-level backup.




Something that might be worth considering is an option to allow pg_dump
to use binary COPY.  I don't think this'd work nicely for text dumps,
but seems like custom- or tar-format dumps could be made to use it.
This would probably be a win for many datatypes not only bytea, and it'd
still be far more portable than a filesystem dump.


I'd really love a copy format that works equally well for binary and
text data, optimally compressed. Initial replication to a new Slony
cluster node uses COPY, and network bandwidth may become the limiting
factor. Line-protocol compression would be desirable for that too, but
that's another story.



Regards,
Andreas



Re: [HACKERS] XLogArchivingActive

2006-05-26 Thread Andreas Pflug

Simon Riggs wrote:

On Thu, 2006-05-25 at 17:25 +0200, Andreas Pflug wrote:





Currently, I have to
edit postgresql.conf and SIGHUP to turn on archiving, configuring a
(hopefully) writable directory, do the backup, then edit postgresql.conf
and SIGHUP again. Not too convenient...



You're doing this for pgAdmin right?


Not yet, just trying to manage a server.



My understanding was that we had the tools now to edit the
postgresql.conf programmatically? 


Seems like its not too convenient to change the way the server operates
to do this, as long as we solve the SIGHUP/postgresql.conf problem. I'm
also not that happy about curtailing people's options on backup either:
if people decided they wanted to have a mixture of isolated on-line
backup (as you suggest), plus active archiving at other times they would
still have the problems you suggest.


Why?
My suggestion is to redefine XLogArchivingActive. Currently, it tests
for a non-null archive_command. I propose

bool XLogArchivingActive()
{
    if (XLogArchiveCommand[0] == 0)
        return false;
    return (XLogPermanentArchive        /* from GUC */
            || OnlineBackupRunning());  /* from pg_start_backup */
}

The people you mention simply have XLogPermanentArchive = true in
postgresql.conf, delivering the current behaviour.




Not sure what the edit commands are offhand, but we would need the
following program:

- edit postgresql.conf
- pg_reload_conf()
- wait 30
- pg_start_backup('blah')
- backup
- pg_stop_backup()
- unedit postgresql.conf
- pg_reload_conf()

Which could then be wrapped even more simply as

- pg_start_backup_online('blah')
- backup
- pg_stop_backup_online()


Editing postgresql.conf for this is ugly. In addition,
pg_start_backup_online would need an additional parameter, the (highly
machine-specific) archive_command string. I'd prefer that parameter to
stay untouched in postgresql.conf.
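For reference, the machine-specific piece would then stay where it lives today; the command below is only an illustrative value, not a recommendation:

```
# postgresql.conf -- machine specific, so better kept here than passed
# as a parameter to a pg_start_backup_online() call
archive_command = 'cp %p /mnt/wal_archive/%f'
```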


Regards,
Andreas



Re: [HACKERS] XLogArchivingActive

2006-05-26 Thread Andreas Pflug

Tom Lane wrote:

Andreas Pflug [EMAIL PROTECTED] writes:


Tom Lane wrote:


I'm not sure you can expect that to work.  The system is not built to
guarantee instantaneous response to mode changes like that.



Um, as long as xlog writing stops immediate recycling when 
pg_start_backup is executed everything should be fine, since archived 
logs are not expected to be present until pg_stop_backup is done.



Wrong.  You forgot about all the *other* behaviors that change depending
on XLogArchivingActive, like whether CREATE INDEX gets archived or
just fsync'd.  I don't think it makes sense for CREATE INDEX to change
that behavior in midstream, even assuming that it noticed the flag
change instantly.


Ok, but how can I recognize whether all running commands have safely
switched to archiving mode after enabling it, so I can continue backing up?


I thought a little about your proposal to use a non-copying
archive_command, since I only want a backup of the state the cluster
had when the backup started, but this won't work: all write actions
that are not appending (truncate, drop) would remove files needed for
the pre-backup state while they are possibly not backed up yet, so the
WAL archive is needed.
Following your proposal, I could redirect archiving to /dev/null while
not backing up, but how can I make sure that WAL files of transactions
that were open when the backup procedure started are written to the WAL
directory rather than lost earlier? When pg_start_backup() is executed,
I'd need the archiver to write all hot xlog files again.


Regards,
Andreas



Re: [HACKERS] Inefficient bytea escaping?

2006-05-26 Thread Andreas Pflug

Tom Lane wrote:

Andreas Pflug [EMAIL PROTECTED] writes:


Tom Lane wrote:


Looking at CopySendData, I wonder whether any traction could be gained
by trying not to call fwrite() once per character.  I'm not sure how
much per-call overhead there is in that function.  We've done a lot of
work trying to optimize the COPY IN path since 8.0, but nothing much
on COPY OUT ...



Hm, I'll see whether I can manage to check CVS HEAD too and see what's
happening; not a production alternative, though.



OK, make sure you get the copy.c version I just committed ...


Here are the results, with the copy patch:

psql \copy 1.4 GB from table, binary:
  8.0     8.1     8.2dev
  36s     34s     36s

psql \copy 1.4 GB to table, binary:
  8.0     8.1     8.2dev
  106s    95s     98s

psql \copy 6.6 GB from table, std:
  8.0     8.1     8.2dev
  375s    362s    290s (second run: 283s)

psql \copy 6.6 GB to table, std:
  8.0     8.1     8.2dev
  511s    230s    238s

INSERT INTO foo SELECT * FROM bar:
  8.0     8.1     8.2dev
  75s     75s     75s

So text COPY is about 20% faster now, but still far from the expected
throughput. The dump disk should be capable of 60MB/s, which would limit
text COPY to about 110 seconds, but the load process is CPU-bound at the
moment.


For comparison purposes, I included the in-server copy benchmarks as
well (bytea STORAGE EXTENDED; EXTERNAL won't make a noticeable
difference). This still seems slower than expected to me, since the
table's on-disk footprint is relatively small (138MB).


Regards,
Andreas



Re: [HACKERS] Inefficient bytea escaping?

2006-05-26 Thread Andreas Pflug

Tom Lane wrote:

Andreas Pflug [EMAIL PROTECTED] writes:


Here are the results, with the copy patch:




psql \copy 1.4 GB from table, binary:
  8.0     8.1     8.2dev
  36s     34s     36s




psql \copy 6.6 GB from table, std:
  8.0     8.1     8.2dev
  375s    362s    290s (second run: 283s)



Hmph.  There's something strange going on on your platform (what is it
anyway?)


Debian, kernel 2.6.26.


 It's interesting (and surprising) that the runtime is
actually less for psql \copy than for server COPY.  This is a dual Xeon
machine, maybe the frontend copy provides more scope to use both CPUs?


The dual-CPU explanation sounds reasonable, but I found the same
tendency on a single 3GHz CPU (HT disabled).

A strange observation using top:
user 90%, sys 10%, idle+wait 0%, but only the postmaster consumes
noticeable CPU, showing 35%; the rest is negligible.


It would be interesting to see what's happening on your machine with
oprofile or equivalent.


I'll investigate further, trying to find the missing CPU.

Regards,
Andreas



[HACKERS] XLogArchivingActive

2006-05-25 Thread Andreas Pflug
Currently, WAL files will be archived as soon as archive_command is set.
IMHO, this is not desirable if no permanent backup is wanted but only
scheduled online backups, because it will flood the wal_archive
destination with files that will never be used.


I propose to introduce a GUC permanent_archiving or so, to select 
whether wal archiving happens permanently or only when a backup is in 
progress (i.e. between pg_start_backup and pg_stop_backup).



Regards,
Andreas



Re: [HACKERS] XLogArchivingActive

2006-05-25 Thread Andreas Pflug

Tom Lane wrote:

Andreas Pflug [EMAIL PROTECTED] writes:

I propose to introduce a GUC permanent_archiving or so, to select 
whether wal archiving happens permanently or only when a backup is in 
progress (i.e. between pg_start_backup and pg_stop_backup).



This is silly.  Why not just turn archiving on and off?


Not quite. I want online backup, but no archiving. Currently, I have to
edit postgresql.conf and SIGHUP to turn on archiving, configuring a
(hopefully) writable directory, do the backup, then edit postgresql.conf
and SIGHUP again. Not too convenient...


Regards,
Andreas



Re: [HACKERS] XLogArchivingActive

2006-05-25 Thread Andreas Pflug

Tom Lane wrote:

Andreas Pflug [EMAIL PROTECTED] writes:


Tom Lane wrote:


This is silly.  Why not just turn archiving on and off?



Not quite. I want online backup, but no archiving. Currently, I have to
edit postgresql.conf and SIGHUP to turn on archiving, configuring a
(hopefully) writable directory, do the backup, then edit postgresql.conf
and SIGHUP again. Not too convenient...



You don't get to count the edit/SIGHUP steps, because those would be the
same for any other GUC.


That's right, but my proposal would implicitly switch on archiving
while backup is in progress, thus explicitly enabling/disabling
archiving wouldn't be necessary.


AFAICS you could get the effect by setting up an archive_command script

    sleep 100
    exit 1

so that the archiver will do nothing.


Doesn't WAL expect the already-archived WAL files to be recyclable, so
they could get overwritten in the pg_xlog dir while the backup is
running? Additionally, the docs recommend omitting pg_xlog from the
file-level backup, so a restart would need the archived WAL files, no?





BTW, I don't actually understand why you want this at all.  If you're
not going to keep a continuing series of WAL files, you don't have any
PITR capability.  What you're proposing seems like a bulky, unportable,
hard-to-use equivalent of pg_dump.  Why not use pg_dump?


Because pg_dump will take too long and create bloated dump files. All I 
need is a physical backup for disaster recovery purposes without 
bringing down the server.


In my case, I'd expect a DB that uses 114GB on disk to consume 1.4TB
when pg_dumped, too much for the available backup capacity (especially
compared to the net content, about 290GB). See the other post
"inefficient bytea escaping" for details.


Regards,
Andreas



Re: [HACKERS] Inefficient bytea escaping?

2006-05-25 Thread Andreas Pflug

Tom Lane wrote:

Andreas Pflug [EMAIL PROTECTED] writes:

When dumping the table with psql \copy (non-binary), the resulting file
would be 6.6GB in size, taking about 5.5 minutes. Using psql \copy WITH
BINARY (modified psql as posted to -patches), the time was cut down to
21-22 seconds (filesize 1.4GB as expected), which is near the physical
throughput of the target disk. If server-based COPY to file is used, the
same factor of 12 can be observed; CPU is up to 100% (single P4 3GHz,
2MB cache, HT disabled, 1GB main mem).



This is with an 8.0.x server, right?


I've tested both 8.0.5 and 8.1.4, no difference observed.


Testing a similar case with CVS HEAD, I see about a 5x speed difference,
which is right in line with the difference in the physical amount of
data written.


That's what I would have expected, apparently the data is near worst case.

(I was testing a case where all the bytes were emitted as '\nnn', so
it's the worst case.)  oprofile says the time is being spent in
CopyAttributeOutText() and fwrite().  So I don't think there's anything
to be optimized here, as far as bytea goes: its binary representation
is just inherently a lot smaller.


Unfortunately, binary isn't the cure-all, since copying normal data with
the binary option might bloat it by a factor of two or so. I wish there
were a third option that works well for both kinds of data. That's not
only a question of dump file sizes, but also of network throughput
(online compression in the line protocol would be desirable for this).




Looking at CopySendData, I wonder whether any traction could be gained
by trying not to call fwrite() once per character.  I'm not sure how
much per-call overhead there is in that function.  We've done a lot of
work trying to optimize the COPY IN path since 8.0, but nothing much
on COPY OUT ...


Hm, I'll see whether I can manage to check CVS HEAD too and see what's
happening; not a production alternative, though.


Regards,
Andreas



Re: [HACKERS] XLogArchivingActive

2006-05-25 Thread Andreas Pflug

Jim Nasby wrote:

On May 25, 2006, at 11:24 AM, Andreas Pflug wrote:

BTW, I don't actually understand why you want this at all.  If you're
not going to keep a continuing series of WAL files, you don't have any
PITR capability.  What you're proposing seems like a bulky, unportable,
hard-to-use equivalent of pg_dump.  Why not use pg_dump?


Because pg_dump will take too long and create bloated dump files. All 
I need is a physical backup for disaster recovery purposes without 
bringing down the server.


In my case, I'd expect a DB that uses 114GB on disk to consume 1.4TB 
when pg_dumped, too much for the available backup capacity (esp. 
compared to net content, about 290GB). See other post inefficient 
bytea escaping for details.


Another consideration is that you can use rsync to update a 
filesystem-level backup, but there's no pg_dump equivalent. On a large 
database that can make a sizable difference in the amount of time 
required for a backup.

That's fine for cutting the backup execution time, but to guarantee
consistency while the cluster is running, pg_start_backup/pg_stop_backup
and WAL archiving will still be necessary.


Regards,
Andreas




Re: [HACKERS] New feature proposal

2006-05-19 Thread Andreas Pflug

Marc Munro wrote:

Veil http://pgfoundry.org/projects/veil is currently not a very good
Postgres citizen.  It steals what little shared memory it needs from
postgres' shared memory using ShmemAlloc().

For Postgres 8.2 I would like Veil to be a better citizen and use only
what shared memory has been reserved for postgres add-ins.


Why should this be individually restricted? AFAICS Veil's functionality
would be essential for accessing row-level-ACL-controlled tables, so if
it fails under low-memory conditions, that's much like a backend failure.


Regards,
Andreas



Re: [HACKERS] [OT] MySQL is bad, but THIS bad?

2006-05-19 Thread Andreas Pflug

Martijn van Oosterhout wrote:


The biggest headache I find with using postgres is that various GPL
licenced programs have trouble directly shipping postgresql support
because of our use of OpenSSL. Each and every one of those program
needs to add an exception to their licence for distributors to
distribute postgresql support.


They could distribute a non-SSL-enabled version, *if* they really need
to include libpq in the package, or advise users to replace it with the
common version if SSL is required. I bet 99% of pgsql connections are
not encrypted anyway.


Regards,
Andreas



Re: [HACKERS] Google SoC--Idea Request

2006-04-21 Thread Andreas Pflug

Christopher Kings-Lynne wrote:
I think Martijn van Oosterhout's nearby email on Coverity bug reports might
make a good SoC project, but should it also be added to the TODO list?



I may as well put up phpPgAdmin for it.  We have plenty of projects 
available in phpPgAdmin...


Same with pgAdmin3.

Regards,
Andreas


