Re: [HACKERS] Single-argument variant for array_length and friends?
2013/3/21 Tom Lane t...@sss.pgh.pa.us:
> Brendan Jurd dire...@gmail.com writes:
>> While I was working on my empty array patch I was frequently irritated by the absence of an array_length(anyarray). The same goes for array_upper and array_lower. Most of the time when I work with arrays, they are 1-D, and it's inelegant to have to specify which dimension I mean when there is only one to choose from. The question I have (and would appreciate your input on) is how such single-argument variants should behave when operating on an array with multiple dimensions.
>
> I'm not entirely convinced that this is a good idea, but if we're going to allow it I would argue that array_length(a) should be defined as array_length(a, 1). The other possibilities are too complicated to explain in as few words.

exactly +1

Pavel

> regards, tom lane

-- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] Single-argument variant for array_length and friends?
On 21 March 2013 17:08, Pavel Stehule pavel.steh...@gmail.com wrote:
> 2013/3/21 Tom Lane t...@sss.pgh.pa.us:
>> I'm not entirely convinced that this is a good idea, but if we're going to allow it I would argue that array_length(a) should be defined as array_length(a, 1). The other possibilities are too complicated to explain in as few words.
>
> exactly +1

Hi Pavel,

Is your +1 to array_length(a) being defined as array_length(a, 1), or to Tom's being unconvinced by the whole proposal? Or both?

Cheers, BJ
Re: [HACKERS] Single-argument variant for array_length and friends?
2013/3/21 Brendan Jurd dire...@gmail.com:
> Is your +1 to array_length(a) being defined as array_length(a, 1), or to Tom's being unconvinced by the whole proposal? Or both?

Thinking about it more, I would like to limit the one-parameter array_length function to 1-D arrays only. So it is your (a) use case. But I understand that this variant is not orthogonal. Hard to say what is better.

Almost all operations are done on 1-D arrays, and for multidimensional arrays I don't have a problem with more verbosity (it is better to be more verbose there). Probably the cleanest solution (from an API perspective) is Tom's proposal - with a full ANSI/SQL SET implementation. Then for 1-D operations users could use a SET instead of a 1-D array.

So Tom's proposal is cleaner; your @a proposal is safer. Any other opinions?

Regards
Pavel
Re: [HACKERS] Single-argument variant for array_length and friends?
On 21 March 2013 17:32, Pavel Stehule pavel.steh...@gmail.com wrote:
> Thinking about it more, I would like to limit the one-parameter array_length function to 1-D arrays only. So it is your (a) use case. But I understand that this variant is not orthogonal. Hard to say what is better.

Yes, for me (a) is running a very close 2nd place to (c). The strength of (a) is that it means we aren't making guesses about the user's intention. When a user concocts an expression that is ambiguous, I feel it is usually good to kick it back to them and ask them to be more precise.

On the other hand, I find it very natural to interpret "what is the length of my multidim array" to mean "what is the length of the outermost dimension of my multidim array", because to me a multidim array is just an array that contains more arrays.

Cheers, BJ
Re: [HACKERS] Single-argument variant for array_length and friends?
2013/3/21 Brendan Jurd dire...@gmail.com:
> On the other hand, I find it very natural to interpret "what is the length of my multidim array" to mean "what is the length of the outermost dimension of my multidim array", because to me a multidim array is just an array that contains more arrays.

A lot of PostgreSQL functions work with all items in an array without respect to dimensions - unnest, for example. So the concept of using the outermost dimension is not in PG now, and should not be introduced if that is possible. Moreover, it goes against the verbosity concept introduced by Ada and reused in PL/SQL and PL/pgSQL.

regards
Pavel
[HACKERS] SIGHUP not received by custom bgworkers if postmaster is notified
Hi all,

While playing with custom background workers, I noticed that the postmaster does not notify its registered bgworkers when it receives SIGHUP, so you have to send a SIGHUP directly to the bgworker process to notify it. Signal handling is correctly done only for SIGQUIT and SIGTERM, for shutdown. Attached is a patch fixing that; I simply added a call to SignalUnconnectedWorkers in SIGHUP_handler in postmaster.c.

Regards,
-- Michael

20130321_bgworker_sighup.patch
Re: [HACKERS] [pgsql-advocacy] Call for Google Summer of Code mentors, admins
Magnus Hagander mag...@hagander.net writes:
>> I think mcron already implements it all and is made to be embedded into a larger program.
> As long as your larger program is GPL. Not even LGPL on that one. I'd think that's a killer for that idea...

Oh, are we now talking about including a scheduler in core or contrib? My understanding was that the background worker infrastructure had been made in parts so that we don't even have to talk about scheduler specs and implementation details on -hackers, where the usual answer is that we already have the system's scheduler anyway.

Regards,
-- Dimitri Fontaine
http://2ndQuadrant.fr PostgreSQL : Expertise, Formation et Support
Re: [HACKERS] [pgsql-advocacy] Call for Google Summer of Code mentors, admins
On 20 March 2013 16:03, Thom Brown t...@linux.com wrote:
> On 19 March 2013 17:42, Thom Brown t...@linux.com wrote:
>> On 14 February 2013 18:02, Josh Berkus j...@agliodbs.com wrote:
>>> Folks, Once again, Google is holding Summer of Code. We need to assess whether we want to participate this year. Questions: - Who wants to mentor for GSOC? - Who can admin for GSOC? Thom? - Please suggest project ideas for GSOC - Students seeing this -- please speak up if you have projects you plan to submit.
>>
>> If anyone else has more project ideas to suggest, please do share. Students, please feel free to review the PostgreSQL Todo list for inspiration: http://wiki.postgresql.org/wiki/Todo Of course ensure you don't choose anything too ambitious or trivial.
>
> Okay, here's a random idea (which could be infeasible and/or undesirable). How about a way to internally schedule tasks using a background worker process (introduced in 9.2) to wake on each tick and run tasks? So:
>
> CREATE EXTENSION pg_scheduler;
>
> -- schedule_task(task_command, task_priority, task_start, repeat_interval);
> SELECT schedule_task('REINDEX my_table', 1, '2012-03-20 00:10:00'::timestamp, '1 week'::interval);
>
> SELECT list_tasks();
> -[ RECORD 1 ]---+-----------------------
> task_id         | 1
> task_command    | REINDEX my_table
> task_priority   | 1
> task_start      | 2012-03-20 00:10:00-04
> repeat_interval | 7 days
> owner           | postgres
>
> SELECT delete_task(1);
>
> Tasks would be run in sequence if they share the same scheduled time, ordered by priority descending, beyond which it would be non-deterministic. Or perhaps additional worker processes to fire commands in parallel if necessary. Disclaimer: I haven't really thought this through.

Here's some evidence for my last statement: custom background worker processes are actually being introduced as part of 9.3, not available in 9.2. I don't think that changes things much though.
-- Thom
Re: [HACKERS] [pgsql-advocacy] Call for Google Summer of Code mentors, admins
> Atri Sharma atri.j...@gmail.com writes:
>> We can use a scheduling algorithm, and can define a pool of tasks as well as a time constraint for the amount of time which can be used for running the tasks. Then, a scheduling algorithm can pick tasks from the pool based on priorities and the time duration of a task. I can see a dynamic programming solution to this problem.
> I think mcron already implements it all and is made to be embedded into a larger program. http://www.gnu.org/software/mcron/

I wonder if we can add the domain, something like:

SELECT * FROM DOMAINS mydom;

Returns {a, b, c, d} - their content.

Saludos,
Gilberto Castillo
La Habana, Cuba
[HACKERS] hstore compiler warnings
Is anyone looking at these?

hstore_io.c: In function ‘hstore_to_json_loose’:
hstore_io.c:1312:5: warning: ignoring return value of ‘strtol’, declared with attribute warn_unused_result [-Wunused-result]
hstore_io.c:1324:6: warning: ignoring return value of ‘strtod’, declared with attribute warn_unused_result [-Wunused-result]

gcc (Ubuntu/Linaro 4.7.2-2ubuntu1) 4.7.2

These warnings seem to have started with: http://git.postgresql.org/gitweb/?p=postgresql.git;h=38fb4d978c5bfc377ef979e2595e3472744a3b05

-- Kevin Grittner EnterpriseDB: http://www.enterprisedb.com The Enterprise PostgreSQL Company
Re: [HACKERS] SIGHUP not received by custom bgworkers if postmaster is notified
On 21-03-2013 05:06, Michael Paquier wrote:
> While playing with custom background workers, I noticed that the postmaster does not notify its registered bgworkers when it receives SIGHUP, so you have to send a SIGHUP directly to the bgworker process to notify it.

Per this discussion [1], it seems it is that way by design. AFAICS, controlling when to change configuration parameters is a feature, not a bug. Alvaro said that he will include SIGHUP handling in worker_spi (see [2] for how to process the configuration file).

[1] http://www.postgresql.org/message-id/20121231140353.gc4...@alvh.no-ip.org
[2] http://www.postgresql.org/message-id/1357210591.1964.22.camel@localhost.localdomain

-- Euler Taveira de Oliveira - Timbira http://www.timbira.com.br/
PostgreSQL: Consultoria, Desenvolvimento, Suporte 24x7 e Treinamento
Re: [HACKERS] WIP: index support for regexp search
On Thu, Mar 14, 2013 at 9:40 PM, Alexander Korotkov aekorot...@gmail.com wrote:
> On Wed, Jan 23, 2013 at 7:29 PM, Tom Lane t...@sss.pgh.pa.us wrote:
>> Heikki Linnakangas hlinnakan...@vmware.com writes:
>>> On 23.01.2013 09:36, Alexander Korotkov wrote:
>>>> On Wed, Jan 23, 2013 at 6:08 AM, Tom Lane t...@sss.pgh.pa.us wrote:
>>>>> The biggest problem is that I really don't care for the idea of contrib/pg_trgm being this cozy with the innards of regex_t.
>>>> The only option I see now is to provide a method like export_cnfa which would export the corresponding CNFA in a fixed format.
>>> Yeah, I think that makes sense. The transformation code in trgm_regexp.c would probably be more readable too, if it didn't have to deal with the regex guts' representation of the CNFA. Also, once you have an intermediate representation of the original CNFA, you could do some of the transformation work on that representation, before building the transformed graph containing trigrams. You could eliminate any non-alphanumeric characters, joining states connected by arcs with non-alphanumeric characters, for example.
>> It's not just the CNFA though; the other big API problem is with mapping colors back to characters. Right now, that not only knows way too much about a part of the regex internals we have ambitions to change soon, but it also requires pg_wchar2mb_with_len() and lowerstr(), neither of which should be known to the regex library IMO. So I'm not sure how we divvy that up sanely. To be clear: I'm not going to insist that we have to have a clean API factorization before we commit this at all. But it worries me if we don't even know how we could get to that, because we are going to need it eventually.

Now I have the following idea about the API. Put the code of stage 2 (transform the original CNFA into an automaton-like graph) into the regex engine. It would use an API which describes what exactly we are going to extract from the CNFA. This API could look like this:

typedef char *Prefix;
typedef char *ArcLabel;

typedef struct
{
    Prefix   newPrefix;
    ArcLabel label;
} ArcInfo;

typedef struct
{
    Prefix    (*getInitialPrefix) ();
    bool      (*prefixContains) (Prefix prefix1, Prefix prefix2);
    Prefix  * (*getPrefixes) (Prefix prefix, color c, int *n);
    ArcInfo * (*getArcs) (Prefix prefix, color c, int *n);
    void      (*freePrefix) (Prefix prefix);
    void      (*freeArcLabel) (ArcLabel arcLabel);
} CFNATransformAPI;

getInitialPrefix returns the initial prefix value, like this code does now:

    initkey.prefix.colors[0] = UNKNOWN_COLOR;
    initkey.prefix.colors[1] = UNKNOWN_COLOR;

prefixContains is exactly the same as the existing function of that name. getPrefixes and getArcs cycle the per-step work of addKeys and addArcs. freePrefix and freeArcLabel free the memory used by the Prefix and ArcLabel structures.

Additionally, the regex engine should provide a correct way to examine the colormap:

    int       getColorCharsCount(colormap *cm, color c);
    pg_wchar *getColorChars(colormap *cm, color c);

getColorCharsCount would return -1 if this color should be considered unexpandable.

Now I have a working implementation of this API. The comments still need rework. Could you give me any feedback?

-- With best regards, Alexander Korotkov.

trgm-regexp-0.13.patch.gz
Re: [HACKERS] SIGHUP not received by custom bgworkers if postmaster is notified
Euler Taveira escribió:
> Per this discussion [1], it seems it is that way by design. AFAICS, controlling when to change configuration parameters is a feature, not a bug. Alvaro said that he will include SIGHUP handling in worker_spi (see [2] for how to process the configuration file).

They are opposite ends of the problem. Worker code needs a SIGHUP signal handler, whatever that is (most likely something that causes the configuration to be reread), which is what Guillaume's patch is about; but the postmaster needs to *send* a SIGHUP to its bgworker children, which is what Michael is on about. Currently the postmaster signals children that are connected to shmem, but it's not considering those that aren't connected. At least that's how I understand the issue at hand, without actually looking deeper into it.

-- Álvaro Herrera                http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services
Re: [HACKERS] hstore compiler warnings
On 03/21/2013 10:46 AM, Kevin Grittner wrote:
> Is anyone looking at these?
>
> hstore_io.c: In function ‘hstore_to_json_loose’:
> hstore_io.c:1312:5: warning: ignoring return value of ‘strtol’, declared with attribute warn_unused_result [-Wunused-result]
> hstore_io.c:1324:6: warning: ignoring return value of ‘strtod’, declared with attribute warn_unused_result [-Wunused-result]
>
> gcc (Ubuntu/Linaro 4.7.2-2ubuntu1) 4.7.2

I thought we'd got rid of those. And they don't pop up on my F16 dev box (gcc 4.6.2). I'll check it out.

cheers
andrew
Re: [HACKERS] [COMMITTERS] pgsql: Fix element <@ range cost estimation.
Heikki Linnakangas heikki.linnakan...@iki.fi wrote:
> Fix element <@ range cost estimation.
>
> The statistics-based cost estimation patch for range types broke that, by incorrectly assuming that the left operand of all range operators is a range. That led to a "type x is not a range type" error. Because it took so long for anyone to notice, add a regression test for that case. We still don't do proper statistics-based cost estimation for that, so you just get a default constant estimate. We should look into implementing that, but this patch at least fixes the regression.

Something is still wrong. When I run `make check-world` everything passes, but `make installcheck-world` on a fresh initdb with default configuration yields this:

*** /home/kgrittn/pg/master/src/test/regress/expected/rangetypes.out	2013-03-21 09:17:39.445690793 -0500
--- /home/kgrittn/pg/master/src/test/regress/results/rangetypes.out	2013-03-21 10:13:20.201683022 -0500
***************
*** 1048,1058 ****
  create index test_range_elem_idx on test_range_elem (i);
  insert into test_range_elem select i from generate_series(1,100) i;
  select count(*) from test_range_elem where i <@ int4range(10,50);
!  count
! -------
!     40
! (1 row)
!
  drop table test_range_elem;
  --
  -- Btree_gist is not included by default, so to test exclusion
--- 1048,1054 ----
  create index test_range_elem_idx on test_range_elem (i);
  insert into test_range_elem select i from generate_series(1,100) i;
  select count(*) from test_range_elem where i <@ int4range(10,50);
! ERROR:  type 23 is not a range type
  drop table test_range_elem;
  --
  -- Btree_gist is not included by default, so to test exclusion

-- Kevin Grittner EnterpriseDB: http://www.enterprisedb.com The Enterprise PostgreSQL Company
Re: [HACKERS] [COMMITTERS] pgsql: Fix element <@ range cost estimation.
On 21.03.2013 17:20, Kevin Grittner wrote:
> Something is still wrong. When I run `make check-world` everything passes, but `make installcheck-world` on a fresh initdb with default configuration yields this:

Huh, that's strange. It works for me, and the build farm is happy. I must ask: are you sure the server is running with fresh binaries? If it still fails, could you try to get a stack trace or something?

- Heikki
[HACKERS] Request for Patch Feedback: Lag Lead Window Functions Can Ignore Nulls
The SQL standard defines a RESPECT NULLS or IGNORE NULLS option for lead, lag, [...]. This is not implemented in PostgreSQL (http://www.postgresql.org/docs/devel/static/functions-window.html).

I've had a go at implementing this, and I've attached the resulting patch. It's not finished yet (there's no documentation and no tests, for a start), but I was hoping to find out if my solution is along the right lines. In particular, I'm storing the ignore-nulls flag in the frameOptions of a window function definition, and am adding a function to windowapi.h to get at these options. I'm keeping the last non-null value in WinGetPartitionLocalMemory (which I hope is the right place), but I'm not using any of the *GetDatum macros to access it.

An example of my change's behaviour:

nwhite=# select *, lag(num,0) ignore nulls over (order by generate_series) from
nwhite-# (select generate_series from generate_series(0,10)) s
nwhite-# left outer join
nwhite-# numbers n
nwhite-# on (s.generate_series = n.num);
 generate_series | num | lag
-----------------+-----+-----
               0 |     |
               1 |   1 |   1
               2 |     |   1
               3 |     |   1
               4 |   4 |   4
               5 |   5 |   5
               6 |     |   5
               7 |     |   5
               8 |     |   5
               9 |   9 |   9
              10 |     |   9
(11 rows)

I'd find this feature really useful, so I hope you can help me get my patch to a contributable state.

Thanks - Nick

lead-lag-ignore-nulls.patch
Remove invalid indexes from pg_dump Was: [HACKERS] Support for REINDEX CONCURRENTLY
Hi,

On Sun, Mar 17, 2013 at 9:24 PM, Michael Paquier michael.paqu...@gmail.com wrote:
>> Please find attached the patches wanted: - 20130317_dump_only_valid_index.patch, a 1-line patch that makes pg_dump not take a dump of invalid indexes. This patch can be backpatched to 9.0.
> The patch seems to change pg_dump so that it ignores an invalid index only when the remote server version is >= 9.0. But why not when the remote server version is < 9.0? I think that you should start a new thread to get more attention about this patch if there is not enough feedback.

If failures happen with CREATE INDEX CONCURRENTLY, the system will be left with invalid indexes. I don't think that the user would like to see invalid indexes of an existing system being recreated as valid after a restore. So why not remove invalid indexes from a dump with something like the patch attached? This should perhaps be applied in pg_dump for versions down to 8.2, where CREATE INDEX CONCURRENTLY was implemented. I noticed some recent discussions about that: http://www.postgresql.org/message-id/20121207141236.gb4...@alvh.no-ip.org In this case the problem has been fixed in pg_upgrade directly.

Regards,
-- Michael

20130317_dump_only_valid_index.patch
Re: [HACKERS] sql_drop Event Triggerg
On Wed, Mar 20, 2013 at 5:42 PM, Alvaro Herrera alvhe...@2ndquadrant.com wrote:
> Here's a new version of this patch, rebased on top of the new pg_identify_object() stuff. Note that the regression test doesn't work yet, because I didn't adjust to the new identity output definition (the docs need work, too). But that's a simple change to do. I'm leaving that for later.

I think this is getting there. A few things to think about:

- pg_event_trigger_dropped_objects seems to assume that currentEventTriggerState will be pointing to the same list on every call. But is that necessarily true? I'm thinking about a case where someone opens a cursor in an event trigger and then tries to read from that cursor later in the transaction. I think you might be able to crash the server that way.

- I am not wild about the idea of propagating PG_TRY/PG_CATCH blocks into yet more places. On Linux-x86 they are pretty cheap because Linux doesn't need a system call to change the signal mask and x86 has few registers that must be saved-and-restored, but elsewhere this can be a performance problem. Now maybe ProcessUtility is not a sufficiently-frequently-called function for this to matter... but I'm not sure. The alternative is to teach the error handling pathways about this in somewhat greater detail, since the point of TRY/CATCH is to clean up things that the regular error handling stuff doesn't know about.

-- Robert Haas EnterpriseDB: http://www.enterprisedb.com The Enterprise PostgreSQL Company
Re: [HACKERS] [COMMITTERS] pgsql: Fix element <@ range cost estimation.
Heikki Linnakangas hlinnakan...@vmware.com wrote:
> Huh, that's strange. It works for me, and the build farm is happy. I must ask: are you sure the server is running with fresh binaries? If it still fails, could you try to get a stack trace or something?

Hmm. Just to be sure I used maintainer-clean and another initdb and now I don't see it. If I see it again I'll get a stack trace, but for now it's not throwing the error. Sorry for the noise.

-- Kevin Grittner EnterpriseDB: http://www.enterprisedb.com The Enterprise PostgreSQL Company
Re: [HACKERS] Strange Windows problem, lock_timeout test request
On Mon, Mar 18, 2013 at 10:09 AM, Tom Lane t...@sss.pgh.pa.us wrote:
> Boszormenyi Zoltan z...@cybertec.at writes:
>> How about the attached patch over current GIT? In other words, why am I wrong with this idea?
> Because it's wrong. Removing volatile means that the compiler is permitted to optimize away stores (and fetches!) on the basis of their being unnecessary according to straight-line analysis of the code. Write barriers don't fix that, they only say that stores that the compiler chooses to issue at all have to be ordered a certain way.

I don't think this is correct. The read and write barriers as implemented are designed to function as compiler barriers also, just as they do in the Linux kernel and every other piece of software I've found that implements anything remotely like this, with the lone exception of PostgreSQL. In PostgreSQL, spinlock acquisition and release are defined as CPU barriers but not compiler barriers, and this necessitates extensive use of volatile all over the code base which would be unnecessary if we did this the way it's done in Linux and elsewhere.

However, Zoltan's patch probably isn't right either. First, a write barrier isn't ever the correct solution when there's only one process involved - which is the case here, because the relevant variables are in backend-private memory. A compiler barrier - which is generally far cheaper - might be the right thing, though. However, the position of the barriers in his proposed patch is suspect. As implemented, he's proposing to force each change to alarm_enabled to be scheduled by the compiler before any other writes to memory are completed. But there's no guard against the compiler sliding the change backward, only forward. Now maybe that doesn't matter, if we're only concerned about the timing of updates of alarm_enabled relative to other updates of that same variable. But if there are multiple variables involved, then it matters.
-- Robert Haas EnterpriseDB: http://www.enterprisedb.com The Enterprise PostgreSQL Company
Re: [HACKERS] Should commit_delay be PGC_SIGHUP?
On Wed, Mar 20, 2013 at 5:50 PM, Peter Geoghegan p...@heroku.com wrote:
> I realize that this isn't terribly critical, but I'd like to suggest that commit_delay be made PGC_SIGHUP in 9.3 (it's currently PGC_USERSET). It's not that a poorly chosen commit_delay setting has the potential to adversely affect other backends where the setting *has* been set in those other backends in a suitable way - the same thing can surely be said for work_mem. It just seems to me that commit_delay is now something that's intended to work at cluster granularity, and as such it seems like almost a misrepresentation to make it PGC_USERSET. The fact is that whichever backend happens to end up becoming the group commit leader from one XLogFlush() call to the next is, for all practical purposes, unpredictable. You cannot reasonably hope to avoid a delay within an important transaction that needs to prioritize keeping its own latency low over total cluster throughput. If you set commit_delay to 0 in your important transaction with this in mind, your chances of becoming the group commit leader and avoiding the delay are slim to almost none. Much more often than not, the important transaction will end up becoming a group commit follower, and it'll still spend a significant fraction of commit_delay (about 1/2, on average) blocking on LWLockAcquireOrWait().

This may be true, but so what? We don't generally restrict changing GUC settings on the grounds that people probably won't wish to do so because it isn't useful. We restrict it in situations where it is not technically possible or is liable to be harmful. I'm of the opinion that we should try to keep as many things PGC_USERSET as we possibly can. It makes life easier for DBAs.

-- Robert Haas EnterpriseDB: http://www.enterprisedb.com The Enterprise PostgreSQL Company
Re: [HACKERS] Proposal for Allow postgresql.conf values to be changed via SQL [review]
On Tue, Mar 12, 2013 at 11:36 AM, Greg Stark st...@mit.edu wrote:
> On Tue, Mar 12, 2013 at 9:06 AM, Greg Smith g...@2ndquadrant.com wrote:
>> That's jumping right over a few rounds of simpler ways to do this, and just going right to the approach we know allows adding more such options later with minimal grammar impact.
>
> As Craig intimated, the minimal grammar impact would be simply:
>
> BEGIN;
> set persistent maintenance_work_mem='2GB';
> set persistent work_mem='2GB';
> COMMIT;
>
> Sending the SIGHUP at transaction end seems like a fairly safe thing to do too. It's hard to imagine it failing, and if it did the worst case would be that other backends would still have the old values too.

I'm late to the party here, but my feeling is that we ought NOT to send a SIGHUP automatically. For various reasons already discussed, that's not going to be completely transparent to the DBA. And when things are not fully transparent, IME, it's better not to do them automatically. DBAs like automation when it's 100% reliable - but they don't like the database to automatically do things that can have unforeseen side effects.

Also, while I think that MOST people will probably want a SIGHUP right after SET PERSISTENT, I am not sure that EVERYONE will want that. If you want it and it doesn't happen automatically, you can always do it by hand. If you don't want it and it does happen automatically, you're out of luck: go back to hand-editing config files. I really don't want to leave people with any more reasons to continue hand-editing the config files than we truly must.

-- Robert Haas EnterpriseDB: http://www.enterprisedb.com The Enterprise PostgreSQL Company
Re: [HACKERS] Fast deposits with profit from 30% to 90% monthly!
Terribly sorry about this -- my fault. -- Álvaro Herrera
Re: [HACKERS] hstore compiler warnings
On 03/21/2013 11:18 AM, Andrew Dunstan wrote: On 03/21/2013 10:46 AM, Kevin Grittner wrote: Is anyone looking at these?: hstore_io.c: In function ‘hstore_to_json_loose’: hstore_io.c:1312:5: warning: ignoring return value of ‘strtol’, declared with attribute warn_unused_result [-Wunused-result] hstore_io.c:1324:6: warning: ignoring return value of ‘strtod’, declared with attribute warn_unused_result [-Wunused-result] gcc (Ubuntu/Linaro 4.7.2-2ubuntu1) 4.7.2 These warnings seem to have started with: http://git.postgresql.org/gitweb/?p=postgresql.git;h=38fb4d978c5bfc377ef979e2595e3472744a3b05 I thought we'd got rid of those. And they don't pop up on my F16 dev box (gcc 4.6.2) I'll check it out. This is the only thing that I have found to work with very recent gcc. Seems ugly. Anyone got a better idea? cheers andrew

diff --git a/contrib/hstore/hstore_io.c b/contrib/hstore/hstore_io.c
index a9a55d8..db63ae9 100644
--- a/contrib/hstore/hstore_io.c
+++ b/contrib/hstore/hstore_io.c
@@ -1308,8 +1308,9 @@ hstore_to_json_loose(PG_FUNCTION_ARGS)
 			 * value
 			 */
 			char	   *endptr = "junk";
+			long		ljunk = strtol(src->data, &endptr, 10);

-			(void) strtol(src->data, &endptr, 10);
+			(void) ljunk;
 			if (*endptr == '\0')
 			{
 				/*
@@ -1321,7 +1322,9 @@ hstore_to_json_loose(PG_FUNCTION_ARGS)
 			else
 			{
 				/* not an int - try a double */
-				(void) strtod(src->data, &endptr);
+				double		djunk = strtod(src->data, &endptr);
+
+				(void) djunk;
 				if (*endptr == '\0')
 					is_number = true;
Re: [HACKERS] Should commit_delay be PGC_SIGHUP?
On 21 March 2013 18:27, Robert Haas robertmh...@gmail.com wrote: This may be true, but so what? We don't generally restrict changing GUC settings on the grounds that people probably won't wish to do so because it isn't useful. We restrict it in situations where it is not technically possible or is liable to be harmful. Only one setting will be best for the whole cluster, so neither the user nor the DBA gains if a user sets this to a different value than the one that has been determined to be optimal. Since we wait while holding the lock, it is actually harmful to everyone if anybody sets a stupid value, and might even be considered a denial of service attack. So there is a very good reason to make this SIGHUP, not just a whim. -- Simon Riggs http://www.2ndQuadrant.com/ PostgreSQL Development, 24x7 Support, Training & Services
Re: [HACKERS] Proposal for Allow postgresql.conf values to be changed via SQL [review]
On 3/21/13 2:38 PM, Robert Haas wrote: Also, while I think that MOST people will probably want a SIGHUP right after SET PERSISTENT, I am not sure that EVERYONE will want that. If you want it and it doesn't happen automatically, you can always do it by hand. This is a fair position, and since that's how the feature as written right now works, that helps. I think proceeding this way needs to hand some sort of hint back to the user though, telling them the change isn't active until SIGHUP. The path I don't want to see is where someone uses SET PERSISTENT and can't figure out why nothing changed. It should be as obvious as we can make it to someone that the explicit reload is necessary. -- Greg Smith 2ndQuadrant US g...@2ndquadrant.com Baltimore, MD PostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.com
Re: [HACKERS] hstore compiler warnings
Andrew Dunstan and...@dunslane.net wrote: [assign to local variable and then `(void) varname;` to avoid unused warning] This is the only thing that I have found to work with very recent gcc. Seems ugly. Anyone got a better idea? That seems to be what we've used elsewhere. http://www.postgresql.org/message-id/24446.1318973...@sss.pgh.pa.us -- Kevin Grittner EnterpriseDB: http://www.enterprisedb.com The Enterprise PostgreSQL Company
Re: [HACKERS] Proposal for Allow postgresql.conf values to be changed via SQL [review]
Greg Smith escribió: On 3/21/13 2:38 PM, Robert Haas wrote: Also, while I think that MOST people will probably want a SIGHUP right after SET PERSISTENT, I am not sure that EVERYONE will want that. If you want it and it doesn't happen automatically, you can always do it by hand. This is a fair position, and since that's how the feature as written right now works, that helps. I think proceeding this way needs to hand some sort of hint back to the user though, telling them the change isn't active until SIGHUP. The path I don't want to see is where someone uses SET PERSISTENT and can't figure out why nothing changed. It should be as obvious as we can make it to someone that the explicit reload is necessary. Maybe add some syntax to prevent the SIGHUP for the rare case where that is wanted, say SET PERSISTENT (reload=off) var=val; (perhaps WITH at the end, dunno) -- Álvaro Herrera http://www.2ndQuadrant.com/ PostgreSQL Development, 24x7 Support, Training & Services
Re: [HACKERS] Should commit_delay be PGC_SIGHUP?
On Thu, Mar 21, 2013 at 6:27 PM, Robert Haas robertmh...@gmail.com wrote: This may be true, but so what? We don't generally restrict changing GUC settings on the grounds that people probably won't wish to do so because it isn't useful. We restrict it in situations where it is not technically possible or is liable to be harmful. Sure, but that isn't what I'm concerned about. I'm concerned about people being lulled into a false sense of security about setting commit_delay to 0 locally. If they do that, their actual additional delay at commit time may well be only marginally less than the full commit_delay, and will only rarely actually be 0. -- Peter Geoghegan
Re: [HACKERS] [COMMITTERS] pgsql: Fix element @ range cost estimation.
Kevin Grittner kgri...@ymail.com writes: Hmm. Just to be sure I used maintainer-clean and another initdb and now I don't see it. If I see it again I'll get a stack trace, but for now it's not throwing the error. It happened to me way too many times to have to do maintainer-clean for reasons I didn't understand, and I've been told that when it happens, you have to look at how the build is done. It might be useful to talk some more about those strange cases where we have to maintainer-clean our local copy for things to get back to normal, so that we are able to fix the build scripts down the road. Regards, -- Dimitri Fontaine http://2ndQuadrant.fr PostgreSQL : Expertise, Formation et Support
Re: [HACKERS] Single-argument variant for array_length and friends?
On Thu, Mar 21, 2013 at 2:00 AM, Pavel Stehule pavel.steh...@gmail.com wrote: 2013/3/21 Brendan Jurd dire...@gmail.com: On 21 March 2013 17:32, Pavel Stehule pavel.steh...@gmail.com wrote: If I think about it more, I would like to limit the one-parameter array_length function to only 1D arrays. So it is your (a) use case. But I understand that this variant is not orthogonal. Hard to say what is better. Yes, for me (a) is running a very close 2nd place to (c). The strength of (a) is it means we aren't making guesses about the user's intention. When a user concocts an expression that is ambiguous, I feel it is usually good to kick it back to them and ask them to be more precise. On the other hand, I find it very natural to interpret what is the length of my multidim array to mean what is the length of the outermost dimension of my multidim array, because to me a multidim array is just an array that contains more arrays. A lot of PostgreSQL functions calculate over all items in an array without respect to dimensions - like unnest. So the concept of using the outermost dimension is not in PG now, and should not be introduced if possible. Moreover, it goes against the verbosity concept introduced by Ada and reused in PL/SQL and PL/pgSQL. and pl/psm* merlin
Re: [HACKERS] SIGHUP not received by custom bgworkers if postmaster is notified
On Fri, Mar 22, 2013 at 12:15 AM, Alvaro Herrera alvhe...@2ndquadrant.com wrote: Euler Taveira escribió: On 21-03-2013 05:06, Michael Paquier wrote: While playing with custom background workers, I noticed that postmaster does not notify its registered bgworkers if it receives SIGHUP, so you have to send a SIGHUP directly to the bgworker process to notify it. Signal handling is correctly done for SIGQUIT and SIGTERM for shutdown only. Attached is a patch fixing that; I simply added a call to SignalUnconnectedWorkers in SIGHUP_handler:postmaster.c. Per this discussion [1], it seems it is this way by design. AFAICS controlling when to change configuration parameters is a feature, not a bug. Alvaro said that he will include SIGHUP handling in worker_spi (see [2] for how to process the configuration file). They are opposite ends of the problem. Worker code needs a SIGHUP signal handler, whatever that is (most likely something that causes the configuration to be reread), which is what Guillaume's patch is about; but postmaster needs to *send* a SIGHUP to its bgworker children, which is what Michael is on about. Currently postmaster signals children that are connected to shmem, but it's not considering those that aren't connected. At least that's how I understand the issue at hand, without actually looking deeper into it. Yes, that's exactly the problem. And I believe that the postmaster should also notify its registered bgworkers if it receives a SIGHUP, as it does for its other backends. Have a look at the 1-line patch I sent to see how I fixed that... -- Michael
Re: [HACKERS] autovacuum not prioritising for-wraparound tables
Here's another cut at this patch. This is mainly about the infrastructure to pass the data around in autovacuum; the proposed formulas probably need a lot of work. We still have two terms in autovacuum priority, the first one considers dead tuples and the second one considers the wraparound limit. I have kept Chris' proposal for the second term, but refined the first one a bit per Jim Nasby's suggestion of discounting dead space. So we now have return (d->p_deadtuples * (d->p_livetuples + d->p_deadtuples) * d->p_relpages) / ((d->p_livetuples + 1) * d->p_reltuples * nblocks) + exp(d->p_xidage * logf(nblocks) / UINT_MAX); Here, deadtuples and livetuples come from pgstat data, while relpages and reltuples come from pg_class. nblocks, on the other hand, comes from the actual number of blocks in the table. I haven't considered the case where pg_class.reltuples = 0 (which results in division-by-zero), but I think to be really robust here we'd want to have some code copied from estimate_rel_size; or maybe simply use some hardcoded magic value. I lean towards the latter, because I'm not sure we want to expend a relation open at this point (incurring an attempt to lock the table, which could be problematic); hence the new RelFileNodeGetNumberOfBlocks() thingy, which is admittedly pretty ugly, not to mention untested. (I have considered livetuples=0 however, hence the +1 there). I think we now need to have a more focused discussion on useful formulas to use here. One thing I noticed that fails in the above formula is that as nblocks grows, ceteris paribus, the score falls; but that's wrong, because if you have a table that turns out to have much larger nblocks because it bloated and pgstat lost the message, we need to look harder at it.
So somehow we need to consider the tuple density as given by pg_class.reltuples/pg_class.relpages, and compare with the one given by pgstat.(live+dead) / nblocks; and raise the score as the ratio goes down (in normal conditions the ratio should be 1; a bloated table that pgstat hasn't noticed will have a lower ratio). -- Álvaro Herrerahttp://www.2ndQuadrant.com/ PostgreSQL Development, 24x7 Support, Training Services *** a/src/backend/postmaster/autovacuum.c --- b/src/backend/postmaster/autovacuum.c *** *** 76,81 --- 76,82 #include catalog/dependency.h #include catalog/namespace.h #include catalog/pg_database.h + #include catalog/pg_tablespace.h #include commands/dbcommands.h #include commands/vacuum.h #include lib/ilist.h *** *** 98,103 --- 99,105 #include utils/memutils.h #include utils/ps_status.h #include utils/rel.h + #include utils/relmapper.h #include utils/snapmgr.h #include utils/syscache.h #include utils/timeout.h *** *** 167,173 typedef struct avw_dbase PgStat_StatDBEntry *adw_entry; } avw_dbase; ! /* struct to keep track of tables to vacuum and/or analyze, in 1st pass */ typedef struct av_relation { Oid ar_toastrelid; /* hash key - must be first */ --- 169,175 PgStat_StatDBEntry *adw_entry; } avw_dbase; ! /* struct to keep track of TOAST-main relation mappings */ typedef struct av_relation { Oid ar_toastrelid; /* hash key - must be first */ *** *** 177,182 typedef struct av_relation --- 179,217 * reloptions, or NULL if none */ } av_relation; + /* + * A tasklist is a set of tables to process, collected during a worker's first + * phase. For each table we keep track of its vacuum priority, so that we can + * process them in priority order. 
+ */ + typedef struct avw_tltable + { + Oid tt_reloid; + float4 tt_vacuum_prio; + } avw_tltable; + + /* a list of the above elements */ + typedef struct avw_tasklist + { + int tl_maxelts; + int tl_nelts; + avw_tltable **tl_elts; + } avw_tasklist; + + /* data to compute a table's priority value */ + typedef struct avw_priodata + { + float4 p_deadtuples; + float4 p_livetuples; + float4 p_vacthresh; + float4 p_relpages; + float4 p_reltuples; + RelFileNode p_filenode; + int p_xidage; + int p_frzmaxage; + } avw_priodata; + + /* struct to keep track of tables to vacuum and/or analyze, after rechecking */ typedef struct autovac_table { *** *** 296,305 static void FreeWorkerInfo(int code, Datum arg); static autovac_table *table_recheck_autovac(Oid relid, HTAB *table_toast_map, TupleDesc pg_class_desc); ! static void relation_needs_vacanalyze(Oid relid, AutoVacOpts *relopts, Form_pg_class classForm, ! PgStat_StatTabEntry *tabentry, ! bool *dovacuum, bool *doanalyze, bool *wraparound); static void autovacuum_do_vac_analyze(autovac_table *tab, BufferAccessStrategy bstrategy); --- 331,341 static autovac_table *table_recheck_autovac(Oid relid, HTAB *table_toast_map, TupleDesc
Re: [HACKERS] Support for REINDEX CONCURRENTLY
Is someone planning to provide additional feedback about this patch at some point? Thanks, -- Michael
Re: [HACKERS] Strange Windows problem, lock_timeout test request
Robert Haas robertmh...@gmail.com writes: On Mon, Mar 18, 2013 at 10:09 AM, Tom Lane t...@sss.pgh.pa.us wrote: Because it's wrong. Removing volatile means that the compiler is permitted to optimize away stores (and fetches!) on the basis of their being unnecessary according to straight-line analysis of the code. Write barriers don't fix that, they only say that stores that the compiler chooses to issue at all have to be ordered a certain way. I don't think this is correct. The read and write barriers as implemented are designed to function as compiler barriers also, just as they do in the Linux kernel and every other piece of software I've found that implements anything remotely like this, with the lone exception of PostgreSQL. In PostgreSQL, spinlock acquisition and release are defined as CPU barriers but not a compiler barrier, and this necessitates extensive use of volatile all over the code base which would be unnecessary if we did this the way it's done in Linux and elsewhere. I think you're just as mistaken as Zoltan. Barriers enforce ordering of operations, not whether an operation occurs at all. regards, tom lane
Re: [HACKERS] Enabling Checksums
On Wed, 2013-03-20 at 02:11 +0200, Ants Aasma wrote: Fletcher is also still a strong contender, we just need to replace the 255 modulus with something less prone to common errors, maybe use 65521 as the modulus. I'd have to think how to best combine the values in that case. I believe we can lose the property that neither byte can be zero, just avoiding both being zero seems good enough to me. Agreed on all points. I've been following your analysis and testing, and it looks like there are still at least three viable approaches: 1. Some variant of Fletcher 2. Some variant of CRC32 3. Some SIMD-based checksum Each of those has some open implementation questions, as well. If we settle on one of those approaches, we don't necessarily need the fastest implementation right away. I might even argue that the first patch to be committed should be a simple implementation of whatever algorithm we choose, and then optimization should be done in a separate patch (if it is tricky to get right). Of course, it's hard to settle on the general algorithm to use without knowing the final performance numbers. So right now I'm in somewhat of a holding pattern until we settle on something. Regards, Jeff Davis
Re: [HACKERS] Should commit_delay be PGC_SIGHUP?
On Wed, Mar 20, 2013 at 09:50:55PM +, Peter Geoghegan wrote: The fact is that whichever backend happens to end up becoming the group commit leader from one XLogFlush() call to the next is, for all practical purposes, unpredictable. You cannot reasonably hope to avoid a delay within an important transaction that needs to prioritize keeping its own latency low over total cluster throughput. If you set commit_delay to 0 in your important transaction with this in mind, your chances of becoming the group commit leader and avoiding the delay are slim to almost none. Much more often than not, the important transaction will end up becoming a group commit follower, and it'll still spend a significant fraction of commit_delay (about 1/2, on average) blocking on LWLockAcquireOrWait(). I acknowledge that SET commit_delay = 0 does not usefully reduce latency for a transaction running on a system subject to a uniform, high commit rate. It is useful for other things. Suppose you have a low-concurrency system with commit_delay=0 in postgresql.conf, but you occasionally spin up a parallel task that benefits from nonzero commit_delay. Changing commit_delay in the task's sessions is a decent approximation of, and more convenient than, temporarily modifying postgresql.conf. -- Noah Misch EnterpriseDB http://www.enterprisedb.com
Re: postgres_fdw vs data formatting GUCs (was Re: [HACKERS] [v9.3] writable foreign tables)
This contains some edits to comments that referred to the obsolete and bogus TupleDesc scanning. No mechanical alterations. --- a/contrib/dblink/dblink.c +++ b/contrib/dblink/dblink.c @@ -2961,9 +2961,8 @@ initRemoteGucs(remoteGucs *rgs, PGconn *conn) } /* - * Scan a TupleDesc and, should it contain types that are sensitive to - * GUCs, acquire remote GUCs and set them in a new GUC nesting level. - * This is undone with restoreLocalGucs. + * Acquire remote GUCs that may affect type parsing and set them in a + * new GUC nesting level. */ static void applyRemoteGucs(remoteGucs *rgs) @@ -2974,11 +2973,8 @@ applyRemoteGucs(remoteGucs *rgs) int addedGucNesting = false; /* -* Affected types require local GUC manipulations. Create a new -* GUC NestLevel to overlay the remote settings. -* -* Also, this nesting is done exactly once per remoteGucInfo -* structure, so expect it to come with an invalid NestLevel. +* This nesting is done exactly once per remoteGucInfo structure, +* so expect it to come with an invalid NestLevel. */ Assert(rgs->localGUCNestLevel == -1); diff --git a/contrib/dblink/expected/dblink.out b/contrib/dblink/expected/dblink.out index 3946485..579664e 100644 --- a/contrib/dblink/expected/dblink.out +++ b/contrib/dblink/expected/dblink.out @@ -930,9 +930,8 @@ SELECT dblink_exec('myconn', 'SET datestyle = GERMAN, DMY;'); SET (1 row) --- The following attempt test various paths at which TupleDescs are --- formed and inspected for containment of types requiring local GUC --- setting. +-- The following attempt test various paths at which tuples are formed +-- and inspected for containment of types requiring local GUC setting.
-- single row synchronous case SELECT * FROM dblink('myconn', diff --git a/contrib/dblink/sql/dblink.sql b/contrib/dblink/sql/dblink.sql index de925eb..7ff43fd 100644 --- a/contrib/dblink/sql/dblink.sql +++ b/contrib/dblink/sql/dblink.sql @@ -435,9 +435,8 @@ SET timezone = UTC; SELECT dblink_connect('myconn','dbname=contrib_regression'); SELECT dblink_exec('myconn', 'SET datestyle = GERMAN, DMY;'); --- The following attempt test various paths at which TupleDescs are --- formed and inspected for containment of types requiring local GUC --- setting. +-- The following attempt test various paths at which tuples are formed +-- and inspected for containment of types requiring local GUC setting. -- single row synchronous case SELECT * -- fdr dblink-guc-sensitive-types-v7.patch Description: Binary data
Re: [HACKERS] Should commit_delay be PGC_SIGHUP?
Simon Riggs si...@2ndquadrant.com writes: Only one setting will be best for the whole cluster, so neither the user nor the DBA gains if a user sets this to a different value than the one that has been determined to be optimal. Since we wait while holding the lock it is actually harmful to everyone if anybody sets a stupid value and might even be considered a denial of service attack. So there is a very good reason to make this SIGHUP, not just a whim. Hmm. If a malicious user could hurt performance for other sessions with a bad setting of commit_delay, then USERSET is clearly a bad idea. But it still seems like it could be SUSET rather than SIGHUP. regards, tom lane
Re: [HACKERS] Proposal for Allow postgresql.conf values to be changed via SQL [review]
On Friday, March 22, 2013 12:48 AM Alvaro Herrera wrote: Greg Smith escribió: On 3/21/13 2:38 PM, Robert Haas wrote: Also, while I think that MOST people will probably want a SIGHUP right after SET PERSISTENT, I am not sure that EVERYONE will want that. If you want it and it doesn't happen automatically, you can always do it by hand. This is a fair position, and since that's how the feature as written right now works, that helps. I think proceeding this way needs to hand some sort of hint back to the user though, telling them the change isn't active until SIGHUP. The path I don't want to see is where someone uses SET PERSISTENT and can't figure out why nothing changed. It should be as obvious as we can make it to someone that the explicit reload is necessary. Maybe add some syntax to prevent the SIGHUP for the rare case where that is wanted, say SET PERSISTENT (reload=off) var=val; (perhaps WITH at the end, dunno) I think adding a new syntax change is a little scary for me, not for the matter of implementation but for building consensus on the syntax. Can we do it as an enhancement later? Other ways could be to either give a Notice after each command or mention it clearly in the documentation. I feel giving a Notice after every command doesn't look good, so maybe we can mention it in the documentation. With Regards, Amit Kapila.
Re: [HACKERS] Proposal for Allow postgresql.conf values to be changed via SQL [review]
Amit Kapila escribió: On Friday, March 22, 2013 12:48 AM Alvaro Herrera wrote: Maybe add some syntax to prevent the SIGHUP for the rare case where that is wanted, say SET PERSISTENT (reload=off) var=val; (perhaps WITH at the end, dunno) I think adding a new syntax change is a little scary for me, not for the matter of implementation but for building consensus on the syntax. I cannot but agree on that point. -- Álvaro Herrera http://www.2ndQuadrant.com/ PostgreSQL Development, 24x7 Support, Training & Services
Re: [HACKERS] Proposal for Allow postgresql.conf values to be changed via SQL [review]
On 3/21/13 10:39 PM, Amit Kapila wrote: I think adding a new syntax change is a little scary for me, not for the matter of implementation but for building consensus on the syntax. Can we do it as an enhancement later? Yeah, I think the basics of this could be committed, but have later improvements that let you fine-tune when the reload happens better. Other ways could be to either give a Notice after each command or mention it clearly in the documentation. I feel giving a Notice after every command doesn't look good, so maybe we can mention it in the documentation. I think that a NOTICE after every command is the only way we'll make sure to catch every user who should be notified about the feature's limitation. I was thinking of something like this: NOTICE: New configuration is written but not applied yet. Signal the server to reload settings using pg_reload_conf to activate the change. Maybe that's a NOTICE plus a HINT. -- Greg Smith 2ndQuadrant US g...@2ndquadrant.com Baltimore, MD PostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.com
Re: [HACKERS] Proposal for Allow postgresql.conf values to be changed via SQL [review]
On Friday, March 22, 2013 8:57 AM Alvaro Herrera wrote: Amit Kapila escribió: On Friday, March 22, 2013 12:48 AM Alvaro Herrera wrote: Maybe add some syntax to prevent the SIGHUP for the rare case where that is wanted, say SET PERSISTENT (reload=off) var=val; (perhaps WITH at the end, dunno) I think adding a new syntax change is a little scary for me, not for the matter of implementation but for building consensus on the syntax. I cannot but agree on that point. Sorry, I don't get your point. Do you mean to say that you don't agree with me, and want the new syntax as proposed by you to be implemented? With Regards, Amit Kapila.
[HACKERS] Page replacement algorithm in buffer cache
Hello all, Sorry if this is a naive question. I was going through Greg Smith's slides on the buffer cache (http://www.westnet.com/~gsmith/content/postgresql/InsideBufferCache.pdf). When going through the page replacement algorithm that we use, i.e. the clock-sweep algorithm, I felt there was a potential problem in our current system. Specifically, when a new entry is allocated in the buffer, its USAGE_COUNT is set to 1. On each sweep of the algorithm, the USAGE_COUNT is decremented and an entry whose USAGE_COUNT becomes zero is replaced. I feel that this could lead to a bias towards replacement of relatively younger pages in the cache over older pages. An entry which has just entered the cache with USAGE_COUNT=1 could be replaced soon, but it may be needed frequently in the near future, which would result in it being repeatedly brought into the cache, leading to replacement overheads. I think this is the well-known face-off between LRU and MRU algorithms. How do we work around this problem? Regards, Atri -- Regards, Atri l'apprenant
Re: [HACKERS] Proposal for Allow postgresql.conf values to be changed via SQL [review]
Greg Smith g...@2ndquadrant.com writes: On 3/21/13 10:39 PM, Amit Kapila wrote: I feel giving Notice after every command doesn't look good, so may be we can mention the same in documentation. I think that NOTICE after every command is the only way we'll make sure to catch every user who should be notified about the feature's limitation. We have been around on that type of proposal before. A command that issues a NOTICE as part of its *standard* behavior is really not going to fly; the annoyance, talking-down-to-the-user factor is too high. IOW, if you think the command needs that, then its design is broken and you need to do better. regards, tom lane
Re: [HACKERS] Page replacement algorithm in buffer cache
On Friday, March 22, 2013 10:22 AM Atri Sharma wrote: Hello all, Sorry if this is a naive question. I was going through Greg Smith's slides on the buffer cache (http://www.westnet.com/~gsmith/content/postgresql/InsideBufferCache.pdf). When going through the page replacement algorithm that we use, i.e. the clock-sweep algorithm, I felt there was a potential problem in our current system. Specifically, when a new entry is allocated in the buffer, its USAGE_COUNT is set to 1. On each sweep of the algorithm, the USAGE_COUNT is decremented and an entry whose USAGE_COUNT becomes zero is replaced. Yes, it is replaced, but on the next clock sweep pass, not immediately after the count reaches 0. So until the next pass, if nobody accesses the buffer and all other buffers have higher counts, it can be replaced. Also, for the buffer it has just returned (whose usage count becomes 1), the sweep will only come around to reduce that usage count in the next pass. So on the whole, I think it takes 2 passes for a freshly returned buffer to be reused, in case no one uses it again. With Regards, Amit Kapila.