[HACKERS] Adding probes for smgr
Hi, I'm thinking of adding new probes to trace smgr activities.

In this implementation, I just found that md.c has its own probes within it, but I'm wondering why we do not have those probes within the generic smgr routines themselves. Which would be the better choice?

Any ideas or comments?

Regards,
-- 
Satoshi Nagayasu <sn...@uptime.jp>
Uptime Technologies, LLC. http://www.uptime.jp

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] external_pid_file not removed on postmaster exit
From: pgsql-hackers-ow...@postgresql.org [pgsql-hackers-ow...@postgresql.org] on behalf of Peter Eisentraut [pete...@gmx.net]
Sent: Friday, July 27, 2012 10:39 AM

> It seems strange that the external_pid_file is never removed. There is
> even a C comment about it:
>     /* Should we remove the pid file on postmaster exit? */
> I think it should be removed with a proc_exit hook just like the main
> postmaster.pid file.

external_pid_file is created the first time when it is enabled in postgresql.conf. I think it should be removed once the parameter external_pid_file is unset. Making the handling of postmaster.pid and external_pid_file the same in terms of creation and removal may not be the best choice, as the two have different purposes.

With Regards,
Amit Kapila.
Re: [HACKERS] Using pg_upgrade on log-shipping standby servers
On Fri, Jul 27, 2012 at 08:29:20AM -0400, Robert Haas wrote:

Yes, that would be a problem, because the WAL records are deleted by pg_upgrade. Does a shutdown of the standby not already replay all WAL logs?

We could also just require them to start the standby in master mode and shut it down. The problem with that is it might run things like autovacuum.

I was originally thinking that we would require users to run pg_upgrade on the standby, where you need to first switch into master mode.

OK, sorry, I was confused. You _have_ to run pg_upgrade on the standby --- there are many things we don't preserve, and we need pg_upgrade to move those user files to the right place --- an obvious example is tablespace files. Database OIDs aren't even preserved, so the data directory changes.

These are reasons why you CANNOT run pg_upgrade on the standby, not why you HAVE to. If you run pg_upgrade on the standby and separately on the master, you will end up with divergence precisely because of those things that aren't preserved. Any approach that calls for pg_upgrade to run on the master and standby separately is broken.

Basically, you have to run pg_upgrade on the standby so the user data files are moved properly, and then you would need to run a copy script that would copy over all the non-user files from the master. Are you worried that the standby, by becoming a master, will write to the standby's old-cluster user data files in a way that is inconsistent with the master? If so, I think this entire idea can't work.

-- 
Bruce Momjian <br...@momjian.us>  http://momjian.us
EnterpriseDB http://enterprisedb.com

+ It's impossible for everything to be true. +
Re: [HACKERS] proposal - assign result of query to psql variable
Hello

2012/7/27 Tom Lane <t...@sss.pgh.pa.us>:
> Pavel Stehule <pavel.steh...@gmail.com> writes:
>> 2012/7/26 David Fetter <da...@fetter.org>:
>>> How about \gset var1,,,var2,var3...
>>
>> I don't like this - you can use a fake variable - and ignoring some
>> variable has no big effect on the client
>
>>> Why assign to a variable you'll never use?
>>
>> so why do you get data from the server, when you would not use it?
>
> Yeah. I don't see why you'd be likely to write a select that computes
> columns you don't actually want.

>> Tom - does your proposal release the stored dataset just before the
>> next statement, not like now at the end of the statement?
>
> Huh? I think you'd assign the values to the variables and then PQclear
> the result right away.

yes - I didn't understand the \g mechanism well.

Here is a patch - it is not nice at this moment and it is a little bit longer than I expected - but it works. It supports David's syntax:

postgres=# select 'Hello', 'World' \gset a,b
postgres=# \echo :'a' :'b'
'Hello' 'World'
postgres=# select 'Hello', 'World';
 ?column? │ ?column?
──────────┼──────────
 Hello    │ World
(1 row)

postgres=# \gset a
to few target variables
postgres=# \gset a,
postgres=# \echo :'a'
'Hello'

Regards
Pavel

> regards, tom lane

Attachment: gset.patch
Re: [HACKERS] Adding probes for smgr
Satoshi Nagayasu <sn...@uptime.jp> writes:
> Hi, I'm thinking of adding new probes to trace smgr activities. In this
> implementation, I just found that md.c has its own probes within it, but
> I'm wondering why we do not have those probes within the generic smgr
> routines itself.

IMV smgr is pretty vestigial. I wouldn't recommend loading more functionality onto that layer, because it's as likely as not that we'll just get rid of it someday.

regards, tom lane
Re: [HACKERS] external_pid_file not removed on postmaster exit
Amit Kapila <amit.kap...@huawei.com> writes:
>> I think it should be removed with proc_exit hook just like the main
>> postmaster.pid file.
> external_pid_file is created first time when it is enabled in
> postgresql.conf I think it should be removed once the parameter
> external_pid_file is unset;

Unset? If that parameter is not PGC_POSTMASTER, it certainly ought to be. In any case, that has little to do with what Peter is complaining about ...

regards, tom lane
Re: [HACKERS] Build failures with Mountain Lion
On Jul 27, 2012, at 3:26 PM, Tom Lane <t...@sss.pgh.pa.us> wrote:
> Hm. We have seen similar symptoms reported by people using broken
> openssl installations. I've never tracked down the details but I suspect
> header-vs-library mismatches. Is it possible there are some pre-ML
> openssl-related files hanging about on your machine?

Sigh. I remember now. Having MacPorts in the path before /usr/ horks up the config vs the libraries.

Forgetfully yours,
Rob
Re: [HACKERS] Covering Indexes
On Fri, 2012-07-27 at 15:27 -0500, Merlin Moncure wrote:
> The covering index/uniqueness use case adds legitimacy to the INDEX
> clause of exclusion constraints IMNSHO.

Yes, I think it would be worth revisiting the idea.

> One point of concern though is that (following a bit of testing)
>     alter table foo add exclude using btree (id with =);
> ...is always strictly slower for inserts than
>     alter table foo add primary key(id);

Yes, in my worst-case tests there is about a 2X difference for building the constraint and about a 30-50% slowdown during INSERT (I thought I remembered the difference being lower, but I didn't dig up my old test). That's for an in-memory case; I would expect disk I/O to make the difference less apparent.

> This is probably because it doesn't use the low level btree based
> uniqueness check (the index is not declared UNIQUE) -- shouldn't it do
> that if it can?

We could probably detect that the operator being used is the btree equality operator, set the unique property of the btree, and avoid the normal exclusion constraint check. I'm sure there are some details to work out, but if we start collecting more use cases where people want the flexibility of exclusion constraints and the speed of UNIQUE, we should look into it.

Regards,
Jeff Davis
[HACKERS] effective_io_concurrency
The bitmap heap scan can benefit quite a bit from effective_io_concurrency on RAID systems (and to some extent even on single-spindle systems). However, the planner isn't aware of this, so you just have to be lucky for it to choose the bitmap heap scan instead of something else that can't benefit from effective_io_concurrency.

As far as I can tell, the only thing that drives the bitmap heap scan's cost down is the estimate that you will end up getting multiple tuples from the same block. And because of the fuzziness in compare_path_costs_fuzzily, the estimate has to reach 1% redundant blocks before the bitmap scan will be considered, and I think the benefits of effective_io_concurrency can kick in well before that on very large data sets.

Also, if there is some correlation in the table, then the situation is worse, because the index scan lowers its block-read estimates based on the correlation while the bitmap scan does not lower its estimate. I haven't witnessed such a case, but it seems like there must be correlation levels small enough that most reading is still scattered, but large enough to make a difference in the cost estimates between the two competing access methods, favoring the one that is not actually faster.

From my attempted reading of the "posix_fadvise v22" thread, it seems like modification of the planner was never discussed, rather than being discussed and rejected. So, is there a reason not to make the planner take account of effective_io_concurrency? It might be better yet to make ordinary index scans benefit from effective_io_concurrency, but even if/when that gets done, it would probably still be worthwhile to make the planner understand the benefit.

Cheers, Jeff
Re: [HACKERS] SP-GiST for ranges based on 2d-mapping and quad-tree
On 23.07.2012 10:37, Alexander Korotkov wrote:
> On Fri, Jul 20, 2012 at 3:48 PM, Heikki Linnakangas
> <heikki.linnakan...@enterprisedb.com> wrote:
>> It would be nice to have an introduction, perhaps as a file comment at
>> the top of rangetypes_spgist.c, explaining how the quad tree works. I
>> have a general idea of what a quad tree is, but it's not immediately
>> obvious how it maps to SP-GiST. What is stored on a leaf node and an
>> internal node? What is the 'prefix' (seems to be the centroid)? How are
>> ranges mapped to 2D points? (the function comment of getQuadrant() is a
>> good start for that last one)
>
> I've added some comments at the top of rangetypes_spgist.c.

Thanks, much better. I think the handling of empty ranges needs some further explanation. If I understand the code correctly, the root node can contain a centroid like usual, and empty ranges are placed in the magic 5th quadrant. Alternatively, the root node has no centroid and contains only two subnodes: all empty ranges are placed under one subnode, and non-empty ranges under the other.

It seems it would be simpler if we always stored empty ranges the latter way, with a no-centroid root node, so that nodes with a centroid would always have only 4 subnodes. When the first empty range is added to an index that already contains non-empty values, the choose function could return spgSplitTuple to create a new root node that divides the space into empty and non-empty values. Then again, I don't think it would actually simplify the code much; handling the 5th quadrant doesn't require much code in spg_range_quad_inner_consistent() as it is. So maybe it's better to leave it the way it is.

Or perhaps we should stipulate that a root node with no centroid can only contain empty ranges. When you add the first non-empty range, have the choose function return spgSplitTuple, and create a new root node with a centroid, with the empty ranges in the 5th quadrant.
>> In spg_range_quad_inner_consistent(), if in->hasPrefix == true, ISTM
>> that in all cases where 'empty' is true, 'which' is set to 0, meaning
>> that there can be no matches in any of the quadrants. In most of the
>> case-branches, you explicitly check for 'empty', but even in the ones
>> where you don't, I think you end up setting which=0 if empty==true. I'm
>> not 100% sure about the RANGESTRAT_ADJACENT case, though. Am I missing
>> something?
>
> Oops, it was a bug: RANGESTRAT_ADJACENT should set which=0 if
> empty==true, while RANGESTRAT_CONTAINS and RANGESTRAT_CONTAINED_BY
> should not. Corrected.

Ok. I did some copy-editing, rewording some comments, and fixing whitespace. Patch attached; hope I didn't break anything.

I think the most difficult part of the patch is the spg_range_quad_inner_consistent() function. It's hard to understand how the various strategies are implemented. I started to expand the comments for each strategy, explaining how each strategy maps to a bounding box in the 2D space, but I'm not sure how much that actually helps. Perhaps it would help to also restructure the code so that you first have a switch-case statement that maps the scan key to a bounding box, without worrying about the centroid yet, and then check which quadrants you need to descend to to find points within the box.

The adjacent-strategy is more complicated than a simple bounding-box search, though. I'm having trouble understanding how exactly RANGESTRAT_ADJACENT works. The geometric interpretation of 'adjacent' is that the scan key defines two lines, one horizontal and one vertical, and any point that lies on either of the lines matches the query. Looking at the implementation, it's not clear how the code implements the search along those two lines.

Also, I wonder if we really need to reconstruct the previous value in a RANGESTRAT_ADJACENT search. ISTM we only need to remember which of the two lines we are chasing. For example, if you descend to quadrant 2 because there might be a point there that lies on the horizontal line, but we already know that there can't be any points there that lie on the vertical line, you only need to remember that, not the whole centroid from the previous level.

Does the SP-GiST API require the reconstructed values stored by inner_consistent to be of the correct datatype, or can it store any Datums in the array?

-- 
Heikki Linnakangas
EnterpriseDB http://www.enterprisedb.com

diff --git a/src/backend/utils/adt/Makefile b/src/backend/utils/adt/Makefile
index c5b0a75..a692086 100644
--- a/src/backend/utils/adt/Makefile
+++ b/src/backend/utils/adt/Makefile
@@ -30,7 +30,7 @@ OBJS = acl.o arrayfuncs.o array_selfuncs.o array_typanalyze.o \
 	tsginidx.o tsgistidx.o tsquery.o tsquery_cleanup.o tsquery_gist.o \
 	tsquery_op.o tsquery_rewrite.o tsquery_util.o tsrank.o \
 	tsvector.o tsvector_op.o tsvector_parser.o \
-	txid.o uuid.o windowfuncs.o xml.o
+	txid.o uuid.o windowfuncs.o xml.o rangetypes_spgist.o

 like.o: like.c like_match.c

diff --git
Re: [HACKERS] Covering Indexes
On Fri, Jul 27, 2012 at 1:27 PM, Merlin Moncure <mmonc...@gmail.com> wrote:
> One point of concern though is that (following a bit of testing)
>     alter table foo add exclude using btree (id with =);
> ...is always strictly slower for inserts than
>     alter table foo add primary key(id);
> This is probably because it doesn't use the low level btree based
> uniqueness check (the index is not declared UNIQUE) -- shouldn't it do
> that if it can?

If it did that, then that would make it faster in precisely those cases where I wouldn't use it in the first place -- where there is a less esoteric alternative that does exactly the same thing. While that is not without value, it would seem better (although potentially more difficult, of course) to just make it faster in general instead.

I didn't look into the creation, but rather into inserts. During inserts, it looks like it is doing a lookup into the btree twice, presumably once to maintain it and once to check for uniqueness. If there were some way to cache the lookup between those, I think it would go a long way towards eliminating the performance difference. Could that be done without losing the generality?

And does it matter? I would think covering indexes would be deployed to best effect when your data is not cached in RAM, in which case the I/O cost common to both paths probably overwhelms any extra CPU cost.

Cheers, Jeff
Re: [HACKERS] SP-GiST for ranges based on 2d-mapping and quad-tree
Heikki Linnakangas <heikki.linnakan...@enterprisedb.com> writes:
> Also, I wonder if we really need to reconstruct the previous value in a
> RANGESTRAT_ADJACENT search. ISTM we only need to remember which of the
> two lines we are chasing. For example, if you descend to quadrant 2
> because there might be a point there that lies on the horizontal line,
> but we already know that there can't be any points there that lie on the
> vertical line, you only need to remember that, not the whole centroid
> from the previous level. Does the SP-GiST API require the reconstructed
> values stored by inner_consistent to be of the correct datatype, or can
> it store any Datums in the array?

They have to match the attribute type, at least as to storage details (typbyval/typlen), because the core uses datumCopy to copy them around. We could possibly extend the API to allow a different type to be used for this, but then it wouldn't be reconstructed data in any sense of the word; so I think it'd be abuse of the concept --- which would come back to bite us if we ever try to support index-only scans with SPGiST.

ISTM what this points up is that the opclass might want some private state kept around during a tree descent. If we want to support that, we should support it as a separate concept from reconstructed data.

regards, tom lane
Re: [HACKERS] SP-GiST for ranges based on 2d-mapping and quad-tree
On 29.07.2012 00:50, Tom Lane wrote:
> Heikki Linnakangas <heikki.linnakan...@enterprisedb.com> writes:
>> Also, I wonder if we really need to reconstruct the previous value in a
>> RANGESTRAT_ADJACENT search. ISTM we only need to remember which of the
>> two lines we are chasing. For example, if you descend to quadrant 2
>> because there might be a point there that lies on the horizontal line,
>> but we already know that there can't be any points there that lie on
>> the vertical line, you only need to remember that, not the whole
>> centroid from the previous level. Does the SP-GiST API require the
>> reconstructed values stored by inner_consistent to be of the correct
>> datatype, or can it store any Datums in the array?
>
> They have to match the attribute type, at least as to storage details
> (typbyval/typlen), because the core uses datumCopy to copy them around.
> We could possibly extend the API to allow a different type to be used
> for this, but then it wouldn't be reconstructed data in any sense of the
> word; so I think it'd be abuse of the concept --- which would come back
> to bite us if we ever try to support index-only scans with SPGiST.

I can see that for leaf nodes, but does that also hold for inner nodes?

> ISTM what this points up is that the opclass might want some private
> state kept around during a tree descent. If we want to support that, we
> should support it as a separate concept from reconstructed data.

Yeah, that seems better. The representation of an inner node is datatype-specific; there should be no need to expose reconstructed inner-node values outside a datatype's SP-GiST implementation.

-- 
Heikki Linnakangas
EnterpriseDB http://www.enterprisedb.com
Re: [HACKERS] New statistics for WAL buffer dirty writes
On Sat, Jul 7, 2012 at 9:17 PM, Satoshi Nagayasu <sn...@uptime.jp> wrote:
> Hi,
> Jeff Janes has pointed out that my previous patch could hold a number of
> the dirty writes only in a single local backend, and it could not hold
> them over the whole cluster, because the counter was allocated in the
> local process memory. That's true, and I have fixed it by moving the
> counter into shared memory, as a member of XLogCtlWrite, to keep total
> dirty writes in the cluster.

A concern I have is whether the XLogCtlWrite *Write pointer needs to be declared volatile, to prevent the compiler from pushing operations on it outside of the locks (and so memory barriers) that formally protect it. However, I see that existing code with Insert also does not use volatile, so maybe my concern is baseless. Perhaps the compiler guarantees not to move operations on pointers over the boundaries of function calls? The pattern elsewhere in the code seems to be to use volatile for things protected by spinlocks (implemented by macros) but not for things protected by LWLocks.

The comment "XLogCtrlWrite must be protected with WALWriteLock" misspells XLogCtlWrite.

The final patch will need to add a section to the documentation.

Cheers, Jeff
Re: [HACKERS] SP-GiST for ranges based on 2d-mapping and quad-tree
Heikki Linnakangas <heikki.linnakan...@enterprisedb.com> writes:
> On 29.07.2012 00:50, Tom Lane wrote:
>> We could possibly extend the API to allow a different type to be used
>> for this, but then it wouldn't be reconstructed data in any sense of
>> the word; so I think it'd be abuse of the concept --- which would come
>> back to bite us if we ever try to support index-only scans with SPGiST.
> I can see that for leaf nodes, but does that also hold for inner nodes?

I didn't explain myself terribly well, probably. Consider an opclass that wants some private state like this and *also* needs to reconstruct column data. In principle I suppose we could do away with the reconstructed-data support altogether, and consider that if you need that then it is just a portion of the unspecified private state the opclass is holding. But it's probably a bit late to remove bits of the opclass API.

regards, tom lane
Re: [HACKERS] Adding probes for smgr
On 28 July 2012 17:15, Tom Lane <t...@sss.pgh.pa.us> wrote:
> IMV smgr is pretty vestigial. I wouldn't recommend loading more
> functionality onto that layer, because it's as likely as not that we'll
> just get rid of it someday.

Agreed. I recently found myself reading a paper written by Stonebraker back in the Berkeley days: http://dislab2.hufs.ac.kr/dislab/seminar/2007/ERL-M87-06.pdf

This paper appears to have been published in about 1988, and it shows. It's fairly obvious from reading the opening paragraph that the original rationale for the design of the storage manager doesn't hold these days. Of course, it's also obvious from reading the code, since for example there is only one storage manager module.

This state of affairs sort of reminds me of mcxt.c. The struct MemoryContextData is described as an abstract type that can have multiple implementations, despite the fact that since 2000 (and perhaps earlier) the underlying type is invariably AllocSetContext. I never investigated whether that indirection still needs to exist, but I suspect that it too is a candidate for refactoring. Do you agree?

Incidentally, you might consider refreshing these remarks in the smgr README:

    In Berkeley Postgres each relation was tagged with the ID of the
    storage manager to use for it. This is gone. It would be more
    reasonable to associate storage managers with tablespaces (a feature
    not present as this text is being written, but one likely to emerge
    soon).

-- 
Peter Geoghegan http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training and Services
Re: [HACKERS] Adding probes for smgr
Peter Geoghegan <pe...@2ndquadrant.com> writes:
> On 28 July 2012 17:15, Tom Lane <t...@sss.pgh.pa.us> wrote:
>> IMV smgr is pretty vestigial. I wouldn't recommend loading more
>> functionality onto that layer, because it's as likely as not that we'll
>> just get rid of it someday.
> Agreed. I recently found myself reading a paper written by Stonebraker
> back in the Berkeley days:
> http://dislab2.hufs.ac.kr/dislab/seminar/2007/ERL-M87-06.pdf
> This paper appears to have been published in about 1988, and it shows.
> It's fairly obvious from reading the opening paragraph that the original
> rationale for the design of the storage manager doesn't hold these days.
> Of course, it's also obvious from reading the code, since for example
> there is only one storage manager module.

Yeah. There were actually two storage managers in what we inherited from Berkeley, but we soon got rid of the other one as being useless. (IIRC it was meant for magnetic-core memory ... anybody seen any of that lately?)

I think basically what happened since then is that the functionality Stonebraker et al imagined as being in per-storage-manager code all migrated into the kernel device drivers, or even down into the hardware itself. (SSDs are *way* smarter than the average '80s storage device, and even those were an order of magnitude smarter than what they'd been ten years previously. I used to do device drivers back in the 80's...) There's no longer any good reason to have anything but md.c, which isn't so much a magnetic disk interface as an interface to something that has a Unix block device driver.

> This state of affairs sort of reminds me of mcxt.c. The struct
> MemoryContextData is described as an abstract type that can have
> multiple implementations, despite the fact that since 2000 (and perhaps
> earlier), the underlying type is invariably AllocSetContext. I never
> investigated if that indirection still needs to exist, but I suspect
> that it too is a candidate for refactoring. Do you agree?

Meh. Having invented the MemoryContext interface, I am probably not the best-qualified person to evaluate it objectively. The original thought was that we might have (a) a context type that could allocate storage in shared memory, and/or (b) a context type that could provide better allocation speed at a loss of storage efficiency (eg, lose the ability to pfree individual chunks). Case (a) has never become practical given the inability of SysV-style shared memory to expand at all. I don't know if that might change when/if we switch to some other shmem API. The idea of a different allocation strategy for some usages still seems like something we'll want to do someday, though.

regards, tom lane
Re: [HACKERS] external_pid_file not removed on postmaster exit
From: Tom Lane [t...@sss.pgh.pa.us]
Sent: Saturday, July 28, 2012 9:46 PM

> Amit kapila <amit.kap...@huawei.com> writes:
>>> I think it should be removed with proc_exit hook just like the main
>>> postmaster.pid file.
>> external_pid_file is created first time when it is enabled in
>> postgresql.conf I think it should be removed once the parameter
>> external_pid_file is unset;
>
> Unset?

By "unset", I mean when the configuration parameter external_pid_file is disabled (#external_pid_file). But if the path/filename is changed to a different name across a restart of the server, it will not be possible to delete the previous file, so this will not work out the way I was thinking. However, if the file is deleted at proc_exit as suggested by Peter, there will be no problem.

The reasons I thought the file should not be deleted at every proc_exit are:
a. Initially I thought it might be unnecessary to delete and re-create the file at server shutdown and start.
b. I was not sure whether this file is useful only while the server is running.

With Regards,
Amit Kapila.
[HACKERS] access to psql variables from server again
Hello

I am returning to a topic that I opened with a discussion about enhancing the protocol to support access to host variables. There was a second idea about joining host variables and session variables. This can be implemented without enhancing the protocol, but it is only a one-way tool: we are able to read host parameters only on start, though we can simply forward changes of session variables to the host.

I have another idea, just for psql. We can define two commands to copy a psql variable to the server and from the server to psql. This mechanism is very simple and robust. There is no problem with security, nor any unwanted overhead.

\vf hostvar [ sessionvar ]   -- variable forward
\vl hostvar [ sessionvar ]   -- variable load

The name of the session variable is optional; when it is not specified, host.hostvar is used.

Ideas, comments?

Regards
Pavel