Re: [HACKERS] Re: proposal: a width specification for s specifier (format function), fix behave when positional and ordered placeholders are used
On 28 January 2013 20:32, Dean Rasheed dean.a.rash...@gmail.com wrote:
> In general a format specifier looks like:
>
>     %[parameter][flags][width][.precision][length]type

This highlights another problem with the current implementation --- the '-' flag and the width field need to be parsed separately. So '%-3s' should be parsed as a '-' flag followed by a width of 3, not as a width of -3, which is then interpreted as left-aligned.

This might seem like nitpicking, but actually it is important:

* In the future we might support more flags, and they can be specified in any order, so the '-' flag won't necessarily come immediately before the width.

* The width field is optional, even if the '-' flag is specified. So '%-s' is perfectly legal and should be interpreted as '%s'. The current implementation treats it as a width of 0, which is wrong.

* The width field might not be a number; it might be something like * or *3$. Note that the SUS allows a negative width to be passed in as a function argument using this syntax, in which case it should be treated as if the '-' flag were specified.

Regards,
Dean

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers
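Dean's parsing rule — consume the flags first, then an optional width that is either a non-negative literal, '*', or '*n$' — can be sketched as follows. This is a purely illustrative Python model (the real format() code is C in the backend); the function names are made up for the example.

```python
import re

def parse_spec(spec):
    """Parse a simplified specifier like '%-3s', '%-s', '%*s' or '%*3$s'.

    Returns (left_align, width, type_char). Flags are consumed before the
    width, so '-' is always a flag, never the sign of a negative width.
    A missing width comes back as None; '*' widths come back unresolved.
    """
    m = re.match(r"%(?P<flags>-*)(?P<width>\*(?:\d+\$)?|\d+)?(?P<type>[sIL])$", spec)
    if m is None:
        raise ValueError("unrecognized format specifier: " + spec)
    left_align = '-' in m.group('flags')
    width = m.group('width')              # None, a digit string, '*' or '*n$'
    if width is not None and not width.startswith('*'):
        width = int(width)                # literal widths are non-negative
    return left_align, width, m.group('type')

def resolve_width(left_align, width, star_value=None):
    """Resolve a '*' width from a function argument; per SUS, a negative
    argument counts as the '-' flag with the absolute value as the width."""
    if isinstance(width, str):            # '*' or '*n$'
        if star_value < 0:
            return True, -star_value
        return left_align, star_value
    return left_align, width
```

Note that '%-s' parses cleanly as (True, None, 's'), and a literal number in the format can never be negative — the left-alignment information lives entirely in the flag (or in the sign of a '*' argument).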
Re: [HACKERS] Re: proposal: a width specification for s specifier (format function), fix behave when positional and ordered placeholders are used
On 29 January 2013 08:19, Dean Rasheed dean.a.rash...@gmail.com wrote:
> * The width field is optional, even if the '-' flag is specified. So '%-s' is perfectly legal and should be interpreted as '%s'. The current implementation treats it as a width of 0, which is wrong.

Oh, but of course a width of 0 is the same as no width at all, so the current code is correct after all. That's what happens if I try to write emails before I've had my caffeine :-)

I think my other points remain valid though. It would still be neater to parse the flags separately from the width field, and then all literal numbers that appear in the format should be positive.

Regards,
Dean
[HACKERS] Re: Patches for TODO item: Avoid truncating empty OCDR temp tables on COMMIT
On 29.01.2013 04:41, Robert Haas wrote:
> On Mon, Jan 28, 2013 at 8:39 AM, Heikki Linnakangas hlinnakan...@vmware.com wrote:
>> We already have that MyXactAccessedTempRel global flag. Just checking that should cover many common cases.
>
> +1 for that. I'm actually unconvinced that we need to do any better than that in general. But certainly that seems like a good first step.

Ok, committed that.

- Heikki
Re: [HACKERS] pg_dump --pretty-print-views
Hi Marko,

On Mon, Jan 28, 2013 at 5:01 PM, Marko Tiikkaja pgm...@joh.to wrote:
> On 1/28/13 12:14 PM, Jeevan Chalke wrote:
>> I could not apply the patch with git apply, but was able to apply it with the patch -p1 command.
>
> IME that's normal for patches that went through filterdiff. I do:
>
>     git diff | filterdiff --format=context
>
> to re-format the patches to the context diff preferred on the mailing list. Maybe if I somehow told git to produce context diff it would work..
>
>> However, will you please justify the changes done in xml.out? I guess they are not needed. You might need to configure your sources with libxml.
>
> If you look very carefully, the pretty-printing version adds one space at the very beginning of the output. :-)

That's fine. I am not at all pointing that at you. Have a look at this:

    *** a/src/test/regress/expected/xml.out
    --- b/src/test/regress/expected/xml.out
    *** 3,82 ****
      CREATE TABLE xmltest (
          data xml
      );
      INSERT INTO xmltest VALUES (1, '<value>one</value>');
      INSERT INTO xmltest VALUES (2, '<value>two</value>');
      INSERT INTO xmltest VALUES (3, '<wrong');
    ! ERROR:  invalid XML content
      LINE 1: INSERT INTO xmltest VALUES (3, '<wrong');
    ! DETAIL:  line 1: Couldn't find end of Start Tag wrong line 1
    . . .
    --- 3,84 ----
          data xml
      );
      INSERT INTO xmltest VALUES (1, '<value>one</value>');
    + ERROR:  unsupported XML feature
    + LINE 1: INSERT INTO xmltest VALUES (1, '<value>one</value>');
    + DETAIL:  This functionality requires the server to be built with libxml support.
    + HINT:  You need to rebuild PostgreSQL using --with-libxml.
      INSERT INTO xmltest VALUES (2, '<value>two</value>');
    + ERROR:  unsupported XML feature
    + LINE 1: INSERT INTO xmltest VALUES (2, '<value>two</value>');
    + DETAIL:  This functionality requires the server to be built with libxml support.
    + HINT:  You need to rebuild PostgreSQL using --with-libxml.

These changes are not at all required. Follow the hint. In other words, if I apply your patch and run make check, I get a regression failure for xml.out. Please check.

Thanks.

Also, I am not sure about putting WRAP_COLUMN_DEFAULT by default. I will leave that on the committer's plate. The rest of the code changes look good to me.

> Thanks for reviewing!
>
> Regards,
> Marko Tiikkaja

--
Jeevan B Chalke
Senior Software Engineer, R&D
EnterpriseDB Corporation
The Enterprise PostgreSQL Company
Phone: +91 20 30589500
Website: www.enterprisedb.com
EnterpriseDB Blog: http://blogs.enterprisedb.com/
Follow us on Twitter: http://www.twitter.com/enterprisedb

This e-mail message (and any attachment) is intended for the use of the individual or entity to whom it is addressed. This message contains information from EnterpriseDB Corporation that may be privileged, confidential, or exempt from disclosure under applicable law. If you are not the intended recipient or authorized to receive this for the intended recipient, any use, dissemination, distribution, retention, archiving, or copying of this communication is strictly prohibited. If you have received this e-mail in error, please notify the sender immediately by reply e-mail and delete this message.
Re: [HACKERS] pg_dump --pretty-print-views
On 1/29/13 10:18 AM, Jeevan Chalke wrote:
> That's fine. I am not at all pointing that at you. Have a look at this:

Ugh.. I'm sorry, I don't understand how this happened. I manually looked through all the changes, but somehow this slipped through. Will have a look this evening.

Regards,
Marko Tiikkaja
Re: [HACKERS] Performance Improvement by reducing WAL for Update Operation
On Tuesday, January 29, 2013 2:53 AM Heikki Linnakangas wrote:
> On 28.01.2013 15:39, Amit Kapila wrote:
>> Rebased the patch as per HEAD.
>
> I don't like the way heap_delta_encode has intimate knowledge of how the lz compression works. It feels like a violent punch through the abstraction layers. Ideally, you would just pass the old and new tuple to pglz as char *, and the pglz code would find the common parts. But I guess that's too slow, as that's what I originally suggested and you rejected that approach.
>
> But even if that's not possible on performance grounds, we don't need to completely blow up the abstraction. pglz can still do the encoding - the caller just needs to pass it the attribute boundaries to consider for matches, so that it doesn't need to scan them byte by byte. I came up with the attached patch. I wrote it to demonstrate the API, I'm not 100% sure the result after decoding is correct.

I have checked the patch code and found a few problems.

1. The history should be the old tuple; in that case the call below needs to be changed:

    /* return pglz_compress_with_history((char *) oldtup->t_data, oldtup->t_len,
                                         (char *) newtup->t_data, newtup->t_len,
                                         offsets, noffsets, (PGLZ_Header *) encdata, strategy); */
    return pglz_compress_with_history((char *) newtup->t_data, newtup->t_len,
                                      (char *) oldtup->t_data, oldtup->t_len,
                                      offsets, noffsets, (PGLZ_Header *) encdata, strategy);

2. The offsets array holds the beginning offset of each column, so the code below needs to be changed:

    offsets[noffsets++] = off;
    off = att_addlength_pointer(off, thisatt->attlen, tp + off);
    if (thisatt->attlen <= 0)
        slow = true;        /* can't use attcacheoff anymore */
    // offsets[noffsets++] = off;

Apart from this, some of the test cases are failing, which I need to check. I have debugged the new code, and it appears to me that this will not be as efficient as the current approach of the patch. It needs to build a hash table for history reference and comparison, which can add overhead compared to the existing approach.

I am taking the performance and WAL-reduction data. Can there be another way with which the current patch code can be made better, so that we don't need to change the encoding approach? I have a feeling that this might not be equally good performance-wise.

With Regards,
Amit Kapila.
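The scheme being debated — delta-encoding a new tuple against the old one (the "history"), but only probing for matches at the column boundaries the caller supplies instead of scanning byte by byte — can be modelled like this. This is an illustrative Python sketch, not the pglz C code; the instruction format and function names are invented for the example.

```python
def delta_encode(old, new, old_offsets, new_offsets):
    """Encode `new` against the history `old`, emitting ('copy', start, len)
    instructions that reference the old tuple for unchanged columns and
    ('literal', bytes) instructions for modified ones. Matches are only
    considered at the supplied column offsets, never mid-column."""
    old_cols = {}
    bounds = old_offsets + [len(old)]
    for i in range(len(old_offsets)):
        chunk = old[bounds[i]:bounds[i + 1]]
        old_cols.setdefault(chunk, (bounds[i], len(chunk)))

    result = []
    nbounds = new_offsets + [len(new)]
    for i in range(len(new_offsets)):
        chunk = new[nbounds[i]:nbounds[i + 1]]
        if chunk in old_cols:
            start, length = old_cols[chunk]
            result.append(('copy', start, length))    # column unchanged
        else:
            result.append(('literal', chunk))         # column modified
    return result

def delta_decode(old, encoded):
    """Reconstruct the new tuple from the old tuple plus the instructions;
    this is what WAL replay would do with the old page image at hand."""
    out = bytearray()
    for instr in encoded:
        if instr[0] == 'copy':
            _, start, length = instr
            out += old[start:start + length]
        else:
            out += instr[1]
    return bytes(out)
```

With offsets for both tuples available, each comparison is one memcmp-style test per column, which is the cheapness Amit is after; the hash-table lookup here stands in for whatever match-search structure the compressor uses.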
Re: [HACKERS] [PERFORM] pgbench to the MAXINT
On 28.01.2013 23:30, Gurjeet Singh wrote:
> On Sat, Jan 26, 2013 at 11:24 PM, Satoshi Nagayasu sn...@uptime.jp wrote:
>> 2012/12/21 Gurjeet Singh singh.gurj...@gmail.com:
>>> The patch is very much what you had posted, except for a couple of differences due to bit-rot. (i) I didn't have to #define MAX_RANDOM_VALUE64 since its cousin MAX_RANDOM_VALUE is not used by code anymore, and (ii) I used the ternary operator in the DDLs[] array to decide when to use bigint vs int columns.
>>>
>>> Please review.
>>>
>>> As for tests, I am currently running 'pgbench -i -s 21474' using unpatched pgbench, and am recording the time taken; scale factor 21475 had actually failed to do anything meaningful using unpatched pgbench. Next I'll run with '-s 21475' on the patched version to see if it does the right thing, and in acceptable time compared to '-s 21474'.
>>>
>>> What tests would you and others like to see, to get some confidence in the patch? The machine that I have access to has 62 GB RAM, 16-core 64-hw-threads, and about 900 GB of disk space.
>>
>> I have tested this patch, and have confirmed that the columns for aid are switched to using bigint, instead of int, when the scale factor >= 20,000. (aid columns would exceed the upper bound of int when sf > 21474.)
>>
>> Also, I added a few fixes on it.
>>
>> - Fixed to apply to the current git master.
>> - Fixed to suppress a few more warnings about INT64_FORMAT.
>> - Minor improvement in the docs. (just my suggestion)
>>
>> I attached the revised one.
>
> Looks good to me. Thanks!

Ok, committed. At some point, we might want to have a strtoll() implementation in src/port.

- Heikki
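The scale-factor numbers above follow from pgbench_accounts holding 100,000 rows per unit of scale, so the largest aid overflows a signed 32-bit int once the scale exceeds 21,474 (the patch reportedly switches to bigint at the round figure of 20,000). A quick illustrative check — the function name is made up, but the arithmetic matches the thread:

```python
ROWS_PER_SCALE = 100_000      # pgbench_accounts rows per unit of scale factor
INT32_MAX = 2**31 - 1         # 2,147,483,647

def aid_column_type(scale):
    """Pick the aid column type the way the patch's ternary in DDLs[] would:
    bigint once the largest aid can no longer fit in a 32-bit int."""
    max_aid = scale * ROWS_PER_SCALE
    return 'bigint' if max_aid > INT32_MAX else 'int'
```

This is why '-s 21474' still works unpatched while '-s 21475' fails to do anything meaningful: 21,474 × 100,000 = 2,147,400,000 just fits, and 21,475 × 100,000 = 2,147,500,000 does not.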
Re: [HACKERS] pg_basebackup with -R option and start standby have problems with escaped password
On Thu, Jan 24, 2013 at 7:04 AM, Hari Babu haribabu.ko...@huawei.com wrote:
> On Wed, Jan 23, 2013 11:48 PM, Magnus Hagander wrote:
>> On Wed, Jan 23, 2013 at 10:18 AM, Hari Babu haribabu.ko...@huawei.com wrote:
>>> Test scenario to reproduce:
>>> 1. Start the server.
>>> 2. Create the user as follows:
>>>        ./psql postgres -c "create user user1 superuser login password 'use''1'"
>>> 3. Take the backup with the -R option as follows:
>>>        ./pg_basebackup -D ../../data1 -R -U user1 -W
>>>
>>> The following error occurs when the new standby on the backup database starts:
>>>
>>>        FATAL: could not connect to the primary server: missing "=" after "1'" in connection info string
>>
>> What does the resulting recovery.conf file look like?
>
> The recovery.conf which is generated is as follows:
>
>     standby_mode = 'on'
>     primary_conninfo = 'user=''user1'' password=''use''1'' port=''5432'' '
>
> I observed that the problem is in reading primary_conninfo from the recovery.conf file: the function GUC_scanstr removes the quotes of the string and also converts the consecutive double quotes ('') to a single quote ('). When the same connection string is then used to connect to the primary server, the function conninfo_parse cannot parse the escaped quotes properly, which leads to the problem. Please correct me if anything is wrong in my observation.

Well, it's clearly broken at least :O Zoltan, do you have time to look at it? I won't have time until at least after FOSDEM, unfortunately.

--
Magnus Hagander
Me: http://www.hagander.net/
Work: http://www.redpill-linpro.com/
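The failure is a double-escaping problem: the password must first be escaped for the libpq connection-string syntax (single-quote the value and backslash-escape embedded quotes and backslashes), and then the whole conninfo string must be escaped again for the recovery.conf GUC syntax (double every embedded single quote). A sketch of the two layers in Python — illustrative only, these are not the actual pg_basebackup functions:

```python
def conninfo_escape(value):
    """Escape one value for a libpq connection string: quote the value and
    backslash-escape embedded backslashes and single quotes."""
    escaped = value.replace('\\', '\\\\').replace("'", "\\'")
    return "'" + escaped + "'"

def guc_escape(value):
    """Escape a whole string for a recovery.conf/postgresql.conf setting:
    single-quote it and double any embedded single quotes."""
    return "'" + value.replace("'", "''") + "'"

def make_primary_conninfo(user, password, port):
    # First layer: conninfo syntax; second layer: GUC file syntax.
    conninfo = "user=%s password=%s port=%s" % (
        conninfo_escape(user), conninfo_escape(password), conninfo_escape(port))
    return "primary_conninfo = " + guc_escape(conninfo)
```

After GUC_scanstr undoes the outer layer, the inner value still carries the backslash escape that conninfo_parse understands — unlike the generated file in the bug report, where the quote inside the password was doubled at the wrong layer and so decays into the "missing =" parse error.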
Re: [HACKERS] Performance Improvement by reducing WAL for Update Operation
On 29.01.2013 11:58, Amit Kapila wrote:
> Can there be another way with which the current patch code can be made better, so that we don't need to change the encoding approach? I have a feeling that this might not be equally good performance-wise.

The point is that I don't want heap_delta_encode() to know the internals of pglz compression. You could probably make my patch more like yours in behavior by also passing an array of offsets in the new tuple to check, and only checking for matches at those offsets.

- Heikki
Re: [HACKERS] [sepgsql 2/3] Add db_schema:search permission checks
On 15 January 2013 20:28, Kohei KaiGai kai...@kaigai.gr.jp wrote:
> This patch adds sepgsql support for permission checks equivalent to the existing SCHEMA USE privilege. This feature is constructed on the new OAT_SCHEMA_SEARCH event type being invoked around pg_namespace_aclcheck().

Can you explain the exact detailed rationale behind this patch? Like URLs or other info that explains *why* we are doing this, what problems it causes if we don't, etc.? Otherwise there is no reference point for a review.

Other patch types, like new features, have syntax we can discuss and check; performance patches have measurements we can check. With this, it is just "we add some checks". No idea if that is all the places we need, or whether there is a better way of doing this, or whether anyone cares if we do this or not.

(Same comment for patch 3/3)

--
Simon Riggs http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services
Re: [HACKERS] unlogged tables vs. GIST
Hi Heikki,

On Mon, Jan 28, 2013 at 2:34 PM, Heikki Linnakangas hlinnakan...@vmware.com wrote:
> On 23.01.2013 17:30, Robert Haas wrote:
>> On Wed, Jan 23, 2013 at 4:04 AM, Jeevan Chalke jeevan.cha...@enterprisedb.com wrote:
>>> I guess my earlier patch, which was directly incrementing the ControlFile->unloggedLSN counter, was the concern, as it will take ControlFileLock several times.
>>>
>>> In this version of the patch I did what Robert has suggested. At start of the postmaster, the unloggedLSN value is copied to XLogCtl, a shared memory struct, and all accesses to unloggedLSN use this shared variable under a SpinLock. And since we want to keep this counter persistent across clean shutdown, it is stored in ControlFile before updating it.
>>>
>>> With this approach, we are updating ControlFile only when we shut down the server; the rest of the time we are using a shared memory counter. That means we are not touching pg_control every other millisecond or so. Also, since we are not caring about crashes, XLogging this counter like the OID counter is not required as such.
>>
>> On a quick read-through this looks reasonable to me, but others may have different opinions, and I haven't reviewed in detail. ... [a couple of good points]
>
> In addition to those things Robert pointed out:
>
>      /*
>     + * Temporary GiST indexes are not WAL-logged, but we need LSNs to detect
>     + * concurrent page splits anyway. GetXLogRecPtrForUnloggedRel() provides a fake
>     + * sequence of LSNs for that purpose. Each call generates an LSN that is
>     + * greater than any previous value returned by this function in the same
>     + * session, using a static counter.
>     + * Similarly, unlogged GiST indexes are also not WAL-logged, but we need a
>     + * counter that persists across clean shutdown. Use the counter from ControlFile,
>     + * which is copied into XLogCtl.unloggedLSN, to accomplish that.
>     + * If the relation is UNLOGGED, return the persistent counter from XLogCtl;
>     + * else return the session-wide temporary counter.
>     + */
>     +XLogRecPtr
>     +GetXLogRecPtrForUnloggedRel(Relation rel)
>
> From a modularity point of view, it's not good that xlog.c needs to know about the Relation struct. Perhaps move the logic to check whether the relation is unlogged or not to a function in gistutil.c, and only have a small GetUnloggedLSN() function in xlog.c.

I kept the function as is, but instead of sending Relation as a parameter, it now takes a boolean, isUnlogged. Added a macro for that.

> I'd suggest adding a separate spinlock to protect unloggedLSN. I'm not sure if there's much contention on XLogCtl->info_lck today, but nevertheless it'd be better to keep this separate, also from a modularity point of view.

Hmm. OK. Added ulsn_lck for this.

> @@ -7034,6 +7078,8 @@ CreateCheckPoint(int flags)
>          LWLockAcquire(ControlFileLock, LW_EXCLUSIVE);
>          ControlFile->state = DB_SHUTDOWNING;
>          ControlFile->time = (pg_time_t) time(NULL);
> +        /* Store unloggedLSN value as we want it persistent across shutdown */
> +        ControlFile->unloggedLSN = XLogCtl->unloggedLSN;
>          UpdateControlFile();
>          LWLockRelease(ControlFileLock);
>      }
>
> This needs to acquire the spinlock to read unloggedLSN.

OK. Done.

> Do we need to do anything to unloggedLSN in pg_resetxlog?

I guess no.

Also, added Robert's comment in bufmgr.c. I am still unsure about the spinlock around buf flags as we are just examining it.

Please let me know if I missed anything.

Thanks

> - Heikki

--
Jeevan B Chalke
Senior Software Engineer, R&D
EnterpriseDB Corporation

support_unlogged_gist_indexes_v3.patch
Description: Binary data
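The two-counter scheme under review — a backend-local counter for temporary indexes, and a shared counter for unlogged indexes that is protected by its own spinlock and saved to the control file only at clean shutdown — can be modelled as a toy like this. Python for illustration only; the class and method names are invented, and threading.Lock stands in for the spinlock.

```python
import threading

class FakeLSNProvider:
    """Monotonic fake LSNs for GiST indexes that are not WAL-logged.

    Temporary indexes use a backend-local counter (visible to one session
    only, no lock needed). Unlogged indexes share one counter, guarded by
    its own lock (the patch's ulsn_lck), which is written to the control
    file on clean shutdown so it keeps growing across restarts.
    """
    def __init__(self, restored_unlogged_lsn=1):
        self.session_lsn = 0                        # per-backend, temp indexes
        self.unlogged_lsn = restored_unlogged_lsn   # shared, unlogged indexes
        self.ulsn_lck = threading.Lock()

    def get_fake_lsn(self, is_unlogged):
        if is_unlogged:
            with self.ulsn_lck:                     # shared counter: take lock
                self.unlogged_lsn += 1
                return self.unlogged_lsn
        self.session_lsn += 1                       # backend-local: no lock
        return self.session_lsn

    def checkpoint_shutdown(self):
        """At a shutdown checkpoint, read the shared counter under the lock;
        the returned value is what goes into the control file."""
        with self.ulsn_lck:
            return self.unlogged_lsn
```

Crashes need no special handling: unlogged indexes are reset at crash recovery anyway, which is why the counter is persisted only on the clean-shutdown path rather than WAL-logged like the OID counter.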
Re: [HACKERS] pg_ctl idempotent option
On Tue, Jan 29, 2013 at 04:19:15PM +1100, Josh Berkus wrote:
>> OK, I had some time to think about this. Basically, we have three outcomes for pg_ctl start:
>>
>>     server not running and pg_ctl start success
>>     server start failed
>>     server already running
>>
>> Can't we just assign different return values to these cases, e.g. 0, 1, 2? We already print output telling the user what happened.
>
> Not sure if that would work. Too many admin scripts only check for error output, and not what the error code was.
>
> FWIW, the Solaris/OpenSolaris service scripts, as well as the RH service scripts (I think), already handle things this way. If you say:
>
>     svcadm enable postgresql
>
> ... and postgres is already up, it just returns 0. So it's a common pattern.
>
> So, alternate suggestions to "idempotent":
>
>     --isup
>     --isrunning
>     --ignorerunning
>
> However, I'm really beginning to think that a switch isn't correct, and what we need to do is to have a different pg_ctl *command*, e.g.:
>
>     pg_ctl -D . on
>
> or
>
>     pg_ctl -D . enable

Yeah, that makes a lot of sense. I don't think I like --force because it isn't clear if we are forcing the start to have done something, or forcing the server to be running. Do we need this idempotent feature for stop too?

--
Bruce Momjian  br...@momjian.us  http://momjian.us
EnterpriseDB   http://enterprisedb.com

  + It's impossible for everything to be true. +
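The "enable" semantics being proposed — success means the server is running afterwards, regardless of whether this invocation started it — can be reduced to a tiny decision table over the three outcomes. An illustrative Python sketch; the numeric codes are hypothetical, not what pg_ctl actually returns:

```python
# Hypothetical outcome codes for the three cases discussed in the thread.
STARTED, START_FAILED, ALREADY_RUNNING = 0, 1, 2

def idempotent_start(start_fn):
    """'pg_ctl enable' semantics: exit 0 whenever the desired state (server
    running) is reached, and nonzero only on a genuine start failure."""
    rc = start_fn()
    if rc in (STARTED, ALREADY_RUNNING):
        return 0    # running now, whether or not we started it
    return 1        # start failed: the only real error
```

This is exactly the svcadm behavior Josh describes: "already running" collapses into success, so init scripts need no output parsing.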
Re: [HACKERS] [sepgsql 2/3] Add db_schema:search permission checks
2013/1/29 Simon Riggs si...@2ndquadrant.com:
> On 15 January 2013 20:28, Kohei KaiGai kai...@kaigai.gr.jp wrote:
>> This patch adds sepgsql support for permission checks equivalent to the existing SCHEMA USE privilege. This feature is constructed on the new OAT_SCHEMA_SEARCH event type being invoked around pg_namespace_aclcheck().
>
> Can you explain the exact detailed rationale behind this patch? Like URLs or other info that explains *why* we are doing this, what problems it causes if we don't, etc.?

Sorry for the inconvenience.

The second and third sepgsql patches try to add permission checks in places not yet covered by sepgsql, even though the related permissions are already defined: search of db_schema and execute of db_procedure. Without these patches, it is impossible to control a user's access to these objects from the perspective of the SELinux policy, even though the existing DAC mechanism controls them well.

Right now, sepgsql applies no checks when a user tries to look up an object name under a particular schema. That makes it inconvenient to set up an environment that enforces a per-domain namespace according to the SELinux model, because current sepgsql ignores schema search; in effect, every schema is searchable from the viewpoint of SELinux. What we want to do is almost the same as what the existing permission checks do when a user tries to search a particular schema, except for the criteria used to make the access control decision.

Similarly, sepgsql currently applies no checks when a user tries to execute a function. That makes it impossible to control execution of functions from the viewpoint of SELinux, and there is no way for SELinux to prevent execution of functions defined by other domains, or by others who are not permitted. Again, what we want to do is almost the same as the existing permission checks, except for the criteria used to make the access control decision.

The SELinux model requires the client domain to have db_schema:{search} permission when it tries to search a schema's underlying objects, and db_procedure:{execute} permission when it tries to execute a function — plus db_procedure:{entrypoint} if this execution performs as the entry point of a trusted procedure.

These are the problems behind these patches. In short, I'd like to give sepgsql configuration the same flexibility the existing DAC has. That helps use cases like typical SaaS applications that share a database among multiple clients who must be kept separated from each other. That is the motivation for adding these permissions here.

Thanks,
--
KaiGai Kohei kai...@kaigai.gr.jp
Re: [HACKERS] Hm, table constraints aren't so unique as all that
* Tom Lane (t...@sss.pgh.pa.us) wrote:
> Peter Geoghegan peter.geoghega...@gmail.com writes:
>> I can see the case for fixing this, but I don't feel that it's particularly important that constraints be uniquely identifiable from the proposed new errdata fields.
>
> I think that we'll soon be buried in gripes if they're not. Pretty much the whole point of this patch is to allow applications to get rid of ad-hoc, it-usually-works coding techniques. I'd argue that not checking the entire constraint identity is about as fragile as trying to sed the constraint name out of a potentially-localized error message. In both cases, it often works fine, until the application's context changes.

Perhaps I wasn't clear previously, but this is precisely what I had been arguing for upthread.

Thanks,

Stephen

signature.asc
Description: Digital signature
Re: [HACKERS] [sepgsql 2/3] Add db_schema:search permission checks
On 29 January 2013 13:30, Kohei KaiGai kai...@kaigai.gr.jp wrote:
> It makes it impossible to control execution of functions from the viewpoint of SELinux, and there is no way for SELinux to prevent execution of functions defined by other domains, or by others who are not permitted. Also, what we want to do is almost the same as the existing permission checks, except for the criteria used to make the access control decision.

Do you have a roadmap of all the things this relates to? If SELinux has a viewpoint, I'd like to be able to see a list of capabilities and then which ones are currently missing.

I guess I'm looking for external assurance that someone somewhere needs this and that it fits into a complete overall plan of what we should do. Just like we are able to use the SQL Standard as a guide as to what we need to implement, we would like something to refer back to. Does this have a request id, specification document page number or whatever?

--
Simon Riggs http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services
Re: [HACKERS] Performance Improvement by reducing WAL for Update Operation
On Tuesday, January 29, 2013 3:53 PM Heikki Linnakangas wrote:
> On 29.01.2013 11:58, Amit Kapila wrote:
>> Can there be another way with which the current patch code can be made better, so that we don't need to change the encoding approach? I have a feeling that this might not be equally good performance-wise.
>
> The point is that I don't want heap_delta_encode() to know the internals of pglz compression. You could probably make my patch more like yours in behavior by also passing an array of offsets in the new tuple to check, and only checking for matches at those offsets.

I think it makes sense, because if we have offsets of both the new and old tuples, we can internally use memcmp to compare columns and use the same algorithm for encoding. I will change the patch according to this suggestion.

With Regards,
Amit Kapila.
Re: [HACKERS] enhanced error fields
On 1/28/13 11:08 PM, Tom Lane wrote:
> The issue is that this definition presupposes that we want to complain about a table or a domain, never both, because we're overloading both the SCHEMA_NAME and CONSTRAINT_NAME fields for both purposes. This is annoying in validateDomainConstraint(), where we know the domain constraint that we're complaining about and also the table/column containing the bad value. We can't fill in both TABLE_NAME and DATATYPE_NAME because they both want to set SCHEMA_NAME, and perhaps not to the same value.

I think any error should only complain about one object, in this case the domain. The table, in this case, is more like a context stack item.
Re: [HACKERS] [sepgsql 2/3] Add db_schema:search permission checks
2013/1/29 Simon Riggs si...@2ndquadrant.com:
> On 29 January 2013 13:30, Kohei KaiGai kai...@kaigai.gr.jp wrote:
>> It makes it impossible to control execution of functions from the viewpoint of SELinux, and there is no way for SELinux to prevent execution of functions defined by other domains, or by others who are not permitted. Also, what we want to do is almost the same as the existing permission checks, except for the criteria used to make the access control decision.
>
> Do you have a roadmap of all the things this relates to? If SELinux has a viewpoint, I'd like to be able to see a list of capabilities and then which ones are currently missing.
>
> I guess I'm looking for external assurance that someone somewhere needs this and that it fits into a complete overall plan of what we should do. Just like we are able to use the SQL Standard as a guide as to what we need to implement, we would like something to refer back to. Does this have a request id, specification document page number or whatever?

I previously made several wiki pages as a reference for the permissions to be checked, but they need maintenance work to bring them up to the latest state, such as newly added permissions.
http://wiki.postgresql.org/wiki/SEPostgreSQL_References

Even though selinuxproject.org hosts a permission list, it is rougher than what I described at wiki.postgresql.org.
http://www.selinuxproject.org/page/ObjectClassesPerms#Database_Object_Classes

Unlike the SQL standard, we have fewer resources to document a spec validated by third parties. However, it is a reasonable solution to write up which permission shall be checked at which timing. Let me revise the above wiki page to show my overall plan.

Thanks,
--
KaiGai Kohei kai...@kaigai.gr.jp
Re: [HACKERS] BUG #7493: Postmaster messages unreadable in a Windows console
Alexander Law exclus...@gmail.com writes:
> Please look at the following l10n bug:
> http://www.postgresql.org/message-id/502a26f1.6010...@gmail.com
> and the proposed patch.

That patch looks entirely unsafe to me. Neither of those functions should be expected to be able to run when none of our standard infrastructure (palloc, elog) is up yet. Possibly it would be safe to do this somewhere around where we do GUC initialization.

            regards, tom lane
Re: [HACKERS] [sepgsql 2/3] Add db_schema:search permission checks
On 29 January 2013 14:39, Kohei KaiGai kai...@kaigai.gr.jp wrote:
> I previously made several wiki pages as a reference for the permissions to be checked, but they need maintenance work to bring them up to the latest state, such as newly added permissions.
> http://wiki.postgresql.org/wiki/SEPostgreSQL_References
>
> Even though selinuxproject.org hosts a permission list, it is rougher than what I described at wiki.postgresql.org.
> http://www.selinuxproject.org/page/ObjectClassesPerms#Database_Object_Classes
>
> Unlike the SQL standard, we have fewer resources to document a spec validated by third parties. However, it is a reasonable solution to write up which permission shall be checked at which timing. Let me revise the above wiki page to show my overall plan.

OK, that's looking like a good and useful set of info.

What we need to do is give the SELinux API a spec/version number (yes, the SELinux one) and then match what PostgreSQL implements against that, so we can say "we are moving towards spec compliance with 1.0" and "we have a list of unimplemented features".

That puts this in a proper context, so we know what we are doing, why we are doing it and also when we've finished it. And also, how to know what future external changes will cause additional work.

--
Simon Riggs http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services
Re: [HACKERS] psql \l to accept patterns
Hi, I have tried this patch. https://commitfest.postgresql.org/action/patch_view?id=1051 2013/01/29 14:48, Peter Eisentraut wrote: On Mon, 2013-01-07 at 07:14 -0500, Peter Eisentraut wrote: Here is a patch for psql's \l command to accept patterns, like \d commands do. While at it, I also added an S option to show system objects and removed system objects from the default display. This might be a bit controversial, but it's how it was decided some time ago that the \d commands should act. Most people didn't like the S option, so here is a revised patch that just adds the pattern support. It seems to work well with the latest git master. I think it's good enough to be committed. BTW, is there a good place to put a new regression test for the psql command? I couldn't find one. Any comments or suggestions? Regards, -- Satoshi Nagayasu sn...@uptime.jp Uptime Technologies, LLC. http://www.uptime.jp
Re: [HACKERS] psql \l to accept patterns
Satoshi Nagayasu sn...@uptime.jp writes: On Mon, 2013-01-07 at 07:14 -0500, Peter Eisentraut wrote: Here is a patch for psql's \l command to accept patterns, like \d BTW, is there a good place to put a new regression test for the psql command? I couldn't find one. As far as a test for this specific feature goes, I'd be against adding one, because it'd be likely to result in random failures in make installcheck mode, depending on what other databases are in the installation. More generally, we've tended to put tests of \d-style psql features together with the relevant backend-side tests. The proposed patch to add \gset adds a separate regression test file specifically for psql. I've been debating whether that was worth committing; but if there's near-term interest in adding any more tests for psql features that aren't closely connected to backend features, maybe it's worth having such a file. regards, tom lane
Re: [HACKERS] missing rename support
Please find attached the complete patch for alter rename rule. I have followed all the suggestions. The following things are added in this updated patch: 1) Disallow alter rename of ON SELECT rules. 2) Remove warning. 3) Variables are lined up. 4) Used qualified_name instead of makeRangeVarFromAnyName. 5) Throw an appropriate error if the user tries to rename a rule on an irrelevant object (e.g. an index). 6) psql tab completion support added 7) Regression test cases added. 8) Documentation added. Regards, Ali Dar On Mon, Jan 21, 2013 at 12:34 AM, Dean Rasheed dean.a.rash...@gmail.com wrote: On 3 January 2013 13:49, Ali Dar ali.munir@gmail.com wrote: Find attached an initial patch for ALTER RENAME RULE feature. Please note that it does not have any documentation yet. Hi, I just got round to looking at this. All-in-all it looks OK. I just have a few more review comments, in addition to Stephen's comment about renaming SELECT rules... This compiler warning should be fixed with another #include: alter.c:107:4: warning: implicit declaration of function ‘RenameRewriteRule’ In gram.y, I think you can just use qualified_name instead of makeRangeVarFromAnyName(). In RenameRewriteRule(), I think it's worth doing a check to ensure that the relation actually is a table or a view (you might have some other relation kind at that point in the code). If the user accidentally types the name of an index, say, instead of a table, then it is better to throw an error saying xxx is not a table or a view rather than reporting that the rule doesn't exist. I think this could probably use some simple regression tests to test both the success and failure cases. It would be nice to extend psql tab completion to support this too, although perhaps that could be done as a separate patch. Don't forget the docs! 
Regards, Dean alter-rule-rename_complete.patch Description: Binary data
Re: [HACKERS] Back-branch update releases coming in a couple weeks
On Sun, Jan 27, 2013 at 11:38 PM, MauMau maumau...@gmail.com wrote: From: Fujii Masao masao.fu...@gmail.com On Sun, Jan 27, 2013 at 12:17 AM, MauMau maumau...@gmail.com wrote: Although you said the fix will solve my problem, I don't feel it will. The discussion is about the crash when the standby restarts after the primary vacuums and truncates a table. On the other hand, in my case, the standby crashed during failover (not at restart), emitting a message that some WAL record refers to an uninitialized page (not a non-existent page) of an index (not a table). In addition, fujii_test.sh did not reproduce the mentioned crash on PostgreSQL 9.1.6. I'm sorry to cause you trouble, but could you elaborate on how the fix relates to my case? Maybe I had not been understanding your problem correctly. Could you show a self-contained test case which reproduces the problem? Is the problem still reproducible in REL9_1_STABLE? As I said before, it's very hard to reproduce the problem. All I did was repeat the following sequence: 1. run pg_ctl stop -mi against the primary while the applications were performing INSERT/UPDATE/SELECT. 2. run pg_ctl promote against the synchronous streaming standby. 3. run pg_basebackup on the stopped (original) primary to create a new standby, and start the new standby. I did this failover test dozens of times, probably more than a hundred. And I encountered the crash only once. Although I saw the problem only once, the result is catastrophic. So, I really wish Heikki's patch (in cooperation with Horiguchi-san and you) could fix the issue. Do you think of anything? Umm... it's hard to tell from that information whether your problem has been fixed in the latest 9.1. The bug fix which you mentioned consists of two patches. 
http://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=7bffc9b7bf9e09ddeddc65117e49829f758e500d http://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=970fb12de121941939e64d6e0446c974bba3 The former seems not to be related to your problem because the problem that patch fixed could basically happen only when restarting the standby. The latter might be related to your problem. Regards, -- Fujii Masao
Re: [HACKERS] pg_ctl idempotent option
Bruce Momjian br...@momjian.us writes: pg_upgrade uses that to find out if the server was already running or if we started it. This is to start the server to remove the postmaster.pid file. Also, no one has explained how not knowing if -o options were used was safe. What happened to the plan for pg_upgrade to use a new standalone facility that also allows running a normal psql in single-user mode? IIRC the only thing we didn't want out of that patch was to market the feature as an embedded mode of operations, because it's not, and some level of faith that it's not blocking any future development of a proper embedded mode. Barring that, using the feature for pg_upgrade makes so much sense that I'm left wondering why we're even still having the poor script try to decipher so many situations that the postmaster itself has no problem dealing with. Regards, -- Dimitri Fontaine http://2ndQuadrant.fr PostgreSQL : Expertise, Formation et Support
Re: [HACKERS] [PATCH] pg_isready (was: [WIP] pg_ping utility)
On Tue, Jan 29, 2013 at 3:12 AM, Alvaro Herrera alvhe...@2ndquadrant.com wrote: Phil Sorber wrote: On Mon, Jan 28, 2013 at 11:20 AM, Fujii Masao masao.fu...@gmail.com wrote: Maybe. But I'm not inclined to add a new libpq interface at this stage, because we are in the last CommitFest and I'm not sure whether we have enough time to implement that. Instead, how about using both PQconninfoParse() and PQconndefaults() like uri-regress.c does? Or just remove that output message? At least I don't think that the information about host and port needs to be output, which would make the code much simpler. I think that output is important, as do others upthread. I think it'd be simpler to just disable dbname's ability to double as a conninfo. If we are looking for easy, that is. I don't mind duplicating the behavior from conninfo_array_parse() or uri-regress.c either. I can just put a comment that at some point in the future we should add a libpq interface for it. I suggest duplicating the code for 9.3, and submitting a patch to refactor it into a new libpq function for CF2013-next. If the patch is simple enough, we can consider putting it into 9.3. Agreed. Regards, -- Fujii Masao
Re: [HACKERS] enhanced error fields
2013/1/27 Tom Lane t...@sss.pgh.pa.us: Peter Geoghegan peter.geoghega...@gmail.com writes: On 26 January 2013 22:36, Tom Lane t...@sss.pgh.pa.us wrote: BTW, one thing that struck me in a quick look-through is that the ERRCODE_FOREIGN_KEY_VIOLATION patches seem to inconsistently send either the PK or FK rel as the errtable. Is this really per spec? I'd have sort of expected that the reported table ought to be the one that the constraint belongs to, namely the FK table. Personally, on the face of it I'd expect the inconsistency to simply reflect the fact that the error related to the referencing table or referenced table. I looked in the spec a bit, and what I found seems to support my recollection about this. In SQL99, it's 19.1 get diagnostics statement that defines the usage of these fields, and I see f) If the value of RETURNED_SQLSTATE corresponds to integrity constraint violation, transaction rollback - integrity constraint violation, or a triggered data change violation that was caused by a violation of a referential constraint, then: i) The values of CONSTRAINT_CATALOG and CONSTRAINT_SCHEMA are the catalog name and the unqualified schema name of the schema name of the schema containing the constraint or assertion. The value of CONSTRAINT_NAME is the qualified identifier of the constraint or assertion. ii) Case: 1) If the violated integrity constraint is a table constraint, then the values of CATALOG_NAME, SCHEMA_ NAME, and TABLE_NAME are the catalog name, the unqualified schema name of the schema name, and the qualified identifier or local table name, respectively, of the table in which the table constraint is contained. The notion of a constraint being contained in a table is a bit weird; I guess they mean contained in the table's schema description. Anyway it seems fairly clear to me that it's supposed to be the table that the constraint belongs to, and that has to be the FK table not the PK table. 
+1 regards, tom lane
Re: [HACKERS] Event Triggers: adding information
On Mon, Jan 28, 2013 at 6:19 PM, Dimitri Fontaine dimi...@2ndquadrant.fr wrote: Christopher Browne cbbro...@gmail.com writes: I'm poking at event triggers a bit; would like to set up some examples (and see if they work, or break down badly; both are interesting results) to do some validation of schema for Slony. Cool, thanks! What I'm basically thinking about is to set up some event triggers that run on DROP TABLE / DROP SEQUENCE, and see about cleaning up the replication side of things (e.g. - inject a request to drop the table/sequence from replication). Sure. In what got committed from the current patch series, you will only know that a DROP TABLE (or DROP SEQUENCE) occurred, and we're trying to reach an agreement with Robert on whether we should prefer to add visibility to such events that occur in a CASCADE statement or rather add the OID (and maybe the name) of the object that's going to be dropped. Your opinion is worth a lot on that matter, if you have one to share! :) Hmm. I think some information about the object is pretty needful. For the immediate case I'm poking at, namely looking for dropped tables, I could determine which object is gone by inference; if I run the trigger as part of the ddl_command_end event, then I could run a query that searches the slony table sl_table, and if I find any tables for which there is no longer a corresponding table in pg_catalog.pg_class, then I infer which table got dropped. But I think I'd really rather know more explicitly which table is being dropped. Having the oid available in some trigger variable should suffice. It appears to me as though it's relevant to return an OID for all of the command tags. Something useful to clarify in the documentation is what differences are meaningful between ddl_command_start and ddl_command_end, to make it easier to determine which event one would most want to use. Musing a bit... 
It seems to me that it might be a slick idea to run a trigger at both _start and _end, capturing metadata about the object into temp tables at both times, which would then allow the _end function to compare the data in the temp table to figure out what to do next. I wouldn't think that's apropos as default behaviour; that's something for the crafty developer that's building a trigger function to do. Having a parse tree for the query that initiates the event would be mighty useful, as would be a canonicalized form of the query. I think we could add some useful protection (e.g. - such as my example of an event trigger that generates DROP TABLE FROM REPLICATION) using the present functionality, even perhaps without OIDs, but I don't think I'd want to get into trying to forward arbitrary DDL without having the canonicalized query available. I have a bit of a complaint as to what documentation is included; I don't see any references in the documentation to ddl_command_start / ddl_command_end, which seem to be necessary values for event triggers. What we have now here: http://www.postgresql.org/docs/devel/static/event-trigger-matrix.html http://www.postgresql.org/docs/devel/static/sql-createeventtrigger.html http://www.postgresql.org/docs/devel/static/plpgsql-trigger.html#PLPGSQL-EVENT-TRIGGER Is it not visible enough, or really missing the point? Ah, I missed the second one; I was looking under CREATE TRIGGER, didn't notice that CREATE EVENT TRIGGER was separately available; that resolves most of what I thought was missing. I think a bit more needs to be said about the meanings of the events and the command tags, but what I imagined missing wasn't. I'd tend to think that there should be a new subsection in the man page for CREATE TRIGGER that includes at least two fully formed examples of event triggers, involving the two events in question. Is change of that sort in progress? 
The event triggers are addressed in a whole new chapter of the docs, maybe that's why you didn't find the docs? I found that chapter, just not the command :-). -- When confronted by a difficult problem, solve it by reducing it to the question, How would the Lone Ranger handle this?
Re: [HACKERS] enhanced error fields
2013/1/28 Peter Geoghegan peter.geoghega...@gmail.com: On 28 January 2013 21:33, Peter Eisentraut pete...@gmx.net wrote: Another point, in case someone wants to revisit this in the future, is that these fields were applied in a way that is contrary to the SQL standard, I think. The presented patch interpreted ROUTINE_NAME as: the error happened while executing this function. But according to the standard, the field is only set when the error was directly related to the function itself, for example when calling an INSERT statement in a non-volatile function. Right. It seems to me that ROUTINE_NAME is vastly less compelling than the fields that are likely to be present in the committed patch. GET DIAGNOSTICS, as implemented by DB2, allows clients /to poll/ for a large number of fields. I'm not really interested in that myself, but if we were to add something in the same spirit, I think that extending errdata to support this would not be a sensible approach. Perhaps I'm mistaken, but I can't imagine that it would be terribly useful to anyone (including Pavel) to have a GET DIAGNOSTICS style ROUTINE_NAME. I had hoped I could use it inside an exception handler. Regards Pavel -- Regards, Peter Geoghegan
Re: [HACKERS] enhanced error fields
On 29 January 2013 17:05, Pavel Stehule pavel.steh...@gmail.com wrote: Perhaps I'm mistaken, but I can't imagine that it would be terribly useful to anyone (including Pavel) to have a GET DIAGNOSTICS style ROUTINE_NAME. I had hoped I could use it inside an exception handler. Right, but is that really any use to you if it becomes available for a small subset of errors, specifically, errors that directly relate to the function? You're not going to be able to use it to trace the function where an arbitrary error occurred, if we do something consistent with GET DIAGNOSTICS as described by the SQL standard, it seems. I think that what the SQL standard intends here is actually consistent with what we're going to do with CONSTRAINT_NAME and so on. I just happen to think it's much less interesting, but am not opposed to it in principle (though I may oppose it in practice, if writing the feature means bloating up errdata). -- Regards, Peter Geoghegan
Re: [HACKERS] Re: proposal: a width specification for s specifier (format function), fix behave when positional and ordered placeholders are used
2013/1/28 Tom Lane t...@sss.pgh.pa.us: Pavel Stehule pavel.steh...@gmail.com writes: 2013/1/28 Dean Rasheed dean.a.rash...@gmail.com: Starting with the first patch - it issues a new WARNING if the format string contains a mixture of format specifiers with and without parameter indexes (e.g., 'Hello %s, %1$s'). Having thought about it a bit, I really don't like this for a number of reasons: I am not sure what you dislike - the warnings, or the redesign of the behavior? Both. If we had done this when we first implemented format(), it'd be fine, but it's too late to change it now. There very likely are applications out there that depend on the current behavior. As Dean says, it's not incompatible with SUS, just a superset, so ISTM this patch is proposing to remove documented functionality --- for no very strong reason. I disagree - but I don't have strong arguments. I think the current implementation is wrong, and now is the last time we can fix it - the format() function is not too old, and there is a reasonable chance of minimal impact on users. I didn't propose removing this functionality, but fixing it via a more logical independent counter for ordered arguments. Depending on the previous positional argument is impractical and unclean. I am not satisfied with calling it undefined and leaving it at that. Regards Pavel I vote for rejecting this change entirely. regards, tom lane
Re: [HACKERS] Re: proposal: a width specification for s specifier (format function), fix behave when positional and ordered placeholders are used
2013/1/28 Tom Lane t...@sss.pgh.pa.us: Dean Rasheed dean.a.rash...@gmail.com writes: On 28 January 2013 20:40, Pavel Stehule pavel.steh...@gmail.com wrote: 2013/1/28 Dean Rasheed dean.a.rash...@gmail.com: flags - not currently implemented. Pavel's second patch adds support for the '-' flag for left justified string output. However, I think this should support all datatypes (i.e., %I and %L as well as %s). no - surely not - %I and %L are PostgreSQL extensions, and left or right alignment makes no sense for PostgreSQL identifiers and literals. Left/right alignment and padding in printf() apply to all types, after the data value is converted to a string. Why shouldn't that same principle apply to %I and %L? I agree with Dean --- it would be very strange for these features not to apply to all conversion specifiers (excepting %% of course, which isn't really a conversion specifier but an escaping hack). ok - I have no problem with it - after some thinking - just remove one check. Regards Pavel regards, tom lane
Re: [HACKERS] Re: proposal: a width specification for s specifier (format function), fix behave when positional and ordered placeholders are used
2013/1/29 Dean Rasheed dean.a.rash...@gmail.com: On 28 January 2013 20:32, Dean Rasheed dean.a.rash...@gmail.com wrote: In general a format specifier looks like: %[parameter][flags][width][.precision][length]type This highlights another problem with the current implementation --- the '-' flag and the width field need to be parsed separately. So '%-3s' should be parsed as a '-' flag followed by a width of 3, not as a width of -3, which is then interpreted as left-aligned. This might seem like nitpicking, but actually it is important: * In the future we might support more flags, and they can be specified in any order, so the '-' flag won't necessarily come immediately before the width. * The width field is optional, even if the '-' flag is specified. So '%-s' is perfectly legal and should be interpreted as '%s'. The current implementation treats it as a width of 0, which is wrong. * The width field might not be a number, it might be something like * or *3$. Note that the SUS allows a negative width to be passed in as a function argument using this syntax, in which case it should be treated as if the '-' flag were specified. The possibility of specifying the width as * can be implemented in the future. The format() function was not designed to be fully compatible with SUS - it is a simplified subset with PostgreSQL enhancements. There were talks about integrating the to_char() formats into format(), and we didn't block that - which was the reason I proposed and pushed the name format rather than printf, because it can serve slightly different purposes than a generic printf function. Regards Pavel Regards, Dean
Re: [HACKERS] enhanced error fields
2013/1/29 Peter Geoghegan peter.geoghega...@gmail.com: On 29 January 2013 17:05, Pavel Stehule pavel.steh...@gmail.com wrote: Perhaps I'm mistaken, but I can't imagine that it would be terribly useful to anyone (including Pavel) to have a GET DIAGNOSTICS style ROUTINE_NAME. I had hoped I could use it inside an exception handler. Right, but is that really any use to you if it becomes available for a small subset of errors, specifically, errors that directly relate to the function? You're not going to be able to use it to trace the function where an arbitrary error occurred, if we do something consistent with GET DIAGNOSTICS as described by the SQL standard, it seems. In that sense it is not as useful as I expected. I think that what the SQL standard intends here is actually consistent with what we're going to do with CONSTRAINT_NAME and so on. I just happen to think it's much less interesting, but am not opposed to it in principle (though I may oppose it in practice, if writing the feature means bloating up errdata). I checked performance, and Robert did too, and there is no significant slowdown. 
if I do DROP FUNCTION inner(int); DROP FUNCTION middle(int); DROP FUNCTION outer(); CREATE OR REPLACE FUNCTION inner(int) RETURNS int AS $$ BEGIN RETURN 10/$1; END; $$ LANGUAGE plpgsql; CREATE OR REPLACE FUNCTION middle(int) RETURNS int AS $$ BEGIN RETURN inner($1); END; $$ LANGUAGE plpgsql; CREATE OR REPLACE FUNCTION outer() RETURNS int AS $$ DECLARE text_var1 text; BEGIN RETURN middle(0); EXCEPTION WHEN OTHERS THEN GET STACKED DIAGNOSTICS text_var1 = PG_EXCEPTION_CONTEXT; RAISE NOTICE '%', text_var1; RETURN -1; END; $$ LANGUAGE plpgsql; SELECT outer(); then the output is psql:test.psql:34: NOTICE: PL/pgSQL function inner(integer) line 3 at RETURN PL/pgSQL function middle(integer) line 3 at RETURN PL/pgSQL function outer() line 5 at RETURN I have no way to obtain information about the source of the exception without parsing the context - and the context string can change, so it is not stable and not easily accessible. Why do I need this info? Sometimes when I log information about a handled exception I don't want to log the complete context, due to its size and readability. Another idea - some adjusted parser of the context message could live in the GET STACKED DIAGNOSTICS implementation - so there could be some custom field (just for GET STACKED DIAG.) that returns the expected value. But this value would have to be taken from the context string by parsing - which is not nice - but possible - I played a similar game in orafce. Regards Pavel -- Regards, Peter Geoghegan
Re: [HACKERS] enhanced error fields
I wrote: Rather what we've got is that constraints are uniquely named among those associated with a table, or with a domain. So the correct unique key for a table constraint is table schema + table name + constraint name, whereas for a domain constraint it's domain schema + domain name + constraint name. The current patch provides sufficient information to uniquely identify a table constraint, but not so much domain constraints. Should we fix that? I think it'd be legitimate to re-use SCHEMA_NAME for domain schema, but we'd need a new nonstandard field DOMAIN_NAME (or maybe better DATATYPE_NAME) if we want to fix it. Here's an updated patch (code only, sans documentation) that fixes that and adds some other refactoring that I thought made for improvements. I think this is ready to commit except for the documentation. I eventually concluded that the two ALTER DOMAIN call sites in typecmds.c should report the table/column name (with errtablecol), even though in principle the auxiliary info ought to be about the domain. The reason is that the name of the domain is nearly useless here --- you probably know it already, if you're issuing an ALTER for it. In fact, we don't even bother to provide the name of the domain at all in either human-readable error message. On the other hand, the location of the offending value could be useful info. So I think usefulness should trump principle there. regards, tom lane eelog7.patch.gz Description: eelog7.patch.gz
[HACKERS] Re: [COMMITTERS] pgsql: Tolerate timeline switches while pg_basebackup -X fetch is run
Heikki Linnakangas wrote: Tolerate timeline switches while pg_basebackup -X fetch is running. I just noticed that this commit introduced a few error messages that have a file argument which is not properly quoted: + ereport(ERROR, + (errcode_for_file_access(), + errmsg("requested WAL segment %s has already been removed", + filename))); + ereport(ERROR, + (errmsg("could not find WAL file %s", startfname))); The first one seems to come from e57cd7f0a16, which is pretty old, so it's a bit strange that no one noticed. Not sure what to do here ... should we just update everything including the back branches, or just leave them alone and touch master only? -- Álvaro Herrera http://www.2ndQuadrant.com/ PostgreSQL Development, 24x7 Support, Training Services
Re: [HACKERS] erroneous restore into pg_catalog schema
Robert Haas wrote: On Tue, Jan 15, 2013 at 3:22 PM, Tom Lane t...@sss.pgh.pa.us wrote: Robert Haas robertmh...@gmail.com writes: Or perhaps there is some other way to make sure that the user really meant it, like refusing to create in pg_catalog unless the schema name is given explicitly. I kind of like that idea, actually. That does seem attractive at first glance. Did you have an implementation in mind? The idea that comes to mind for me is to hack namespace.c, either to prevent activeCreationNamespace from getting set to pg_catalog in the first place, or to throw an error in LookupCreationNamespace and friends. I am not sure though if LookupCreationNamespace et al ever get called in contexts where no immediate object creation is intended (and thus maybe an error wouldn't be appropriate). As far as I can see, the principal place we'd want to hack would be recomputeNamespacePath(), so that activeCreationNamespace never ends up pointing to pg_catalog even if that's explicitly listed in search_path. The places where we actually work out what schema to use are RangeVarGetCreationNamespace() and QualifiedNameGetCreationNamespace(), but those don't seem like they'd need any adjustment, unless perhaps we wish to whack around the no schema has been selected to create in error message in some way. Robert, are you working on this? -- Álvaro Herrera http://www.2ndQuadrant.com/ PostgreSQL Development, 24x7 Support, Training Services
Re: [HACKERS] pg_ctl idempotent option
On 1/28/13 9:29 PM, Bruce Momjian wrote: pg_upgrade uses that to find out if the server was already running or if we started it. This is to start the server to remove the postmaster.pid file. It's currently a bit messed up anyway. pg_ctl start is successful if the server is already started, but pg_ctl -w start fails. What pg_upgrade is doing doesn't sound particularly safe, for example when something is concurrently starting or stopping the server. Also, no one has explained how not knowing if -o options were used was safe. Hmm, good point. But we already have this problem -- see above.
Re: [HACKERS] pg_ctl idempotent option
On Tue, Jan 29, 2013 at 04:34:50PM -0500, Peter Eisentraut wrote: On 1/28/13 9:29 PM, Bruce Momjian wrote: pg_upgrade uses that to find out if the server was already running or if we started it. This is to start the server to remove the postmaster.pid file. It's currently a bit messed up anyway. pg_ctl start is successful if the server is already started, but pg_ctl -w start fails. Yeah, that is odd: # pg_ctl start pg_ctl: another server might be running; trying to start server anyway server starting # FATAL: lock file postmaster.pid already exists HINT: Is another postmaster (PID 14144) running in data directory /u/pgsql/data? # echo $? 0 # pg_ctl -w start pg_ctl: another server might be running; trying to start server anyway waiting for server to start FATAL: lock file postmaster.pid already exists HINT: Is another postmaster (PID 14144) running in data directory /u/pgsql/data? pg_ctl: this data directory appears to be running a pre-existing postmaster stopped waiting pg_ctl: could not start server Examine the log output. # echo $? 1 It is because pg_ctl without -w doesn't wait to see if the start was successful. Fortunately, pg_upgrade always uses -w. What pg_upgrade is doing doesn't sound particularly safe, for example when something is concurrently starting or stopping the server. Yes, there is always the risk of someone starting the server while it is down during pg_upgrade; we assume the user has control of others starting the server during pg_upgrade. Also, no one has explained how not knowing if -o options were used was safe. Hmm, good point. But we already have this problem -- see above. Yes, also true. I guess I can only say it works for -w. :-( -- Bruce Momjian br...@momjian.us http://momjian.us EnterpriseDB http://enterprisedb.com + It's impossible for everything to be true. +
Re: [HACKERS] enhanced error fields
I wrote: Here's an updated patch (code only, sans documentation) that fixes that and adds some other refactoring that I thought made for improvements. I think this is ready to commit except for the documentation. Pushed with documentation. regards, tom lane -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] erroneous restore into pg_catalog schema
On Tue, Jan 29, 2013 at 2:30 PM, Alvaro Herrera alvhe...@2ndquadrant.com wrote: Robert Haas escribió: On Tue, Jan 15, 2013 at 3:22 PM, Tom Lane t...@sss.pgh.pa.us wrote: Robert Haas robertmh...@gmail.com writes: Or perhaps there is some other way to make sure that the user really meant it, like refusing to create in pg_catalog unless the schema name is given explicitly. I kind of like that idea, actually. That does seem attractive at first glance. Did you have an implementation in mind? The idea that comes to mind for me is to hack namespace.c, either to prevent activeCreationNamespace from getting set to pg_catalog in the first place, or to throw an error in LookupCreationNamespace and friends. I am not sure though if LookupCreationNamespace et al ever get called in contexts where no immediate object creation is intended (and thus maybe an error wouldn't be appropriate). As far as I can see, the principal place we'd want to hack would be recomputeNamespacePath(), so that activeCreationNamespace never ends up pointing to pg_catalog even if that's explicitly listed in search_path. The places where we actually work out what schema to use are RangeVarGetCreationNamespace() and QualifiedNameGetCreationNamespace(), but those don't seem like they'd need any adjustment, unless perhaps we wish to whack around the "no schema has been selected to create in" error message in some way. Robert, are you working on this? I wasn't, but I can, if we agree on it. -- Robert Haas EnterpriseDB: http://www.enterprisedb.com The Enterprise PostgreSQL Company -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
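[Editor's sketch] The user-visible rule being proposed above might look like the following. This is hypothetical behaviour, not what any released version does; the function name f is a placeholder:

```sql
-- Hypothetical behaviour under the proposed rule; today's behaviour
-- depends on object type, version, and settings.
SET search_path = pg_catalog, public;

-- Proposed: this would no longer silently target pg_catalog just
-- because it appears first in search_path; creation would fall
-- through to the next schema in the path (here, public):
CREATE FUNCTION f() RETURNS int LANGUAGE sql AS 'SELECT 1';

-- Only an explicit qualification would reach the system schema:
CREATE FUNCTION pg_catalog.f() RETURNS int LANGUAGE sql AS 'SELECT 1';
```

That matches the "refusing to create in pg_catalog unless the schema name is given explicitly" idea: accidental restores with a mis-set search_path stop landing in the catalog schema, while deliberate, qualified creation still works.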
Re: [HACKERS] Event Triggers: adding information
Christopher Browne cbbro...@gmail.com writes: Hmm. I think some information about the object is pretty needful. For the immediate case I'm poking at, namely looking for dropped tables, I could determine which object is gone by inference; if I run the trigger as part of the ddl_command_end event, then I could run a query that searches the slony table sl_table, and if I find any tables for which there is no longer a corresponding table in pg_catalog.pg_class, then I infer which table got dropped. But I think I'd really rather know more explicitly which table is being dropped. Having the oid available in some trigger variable should suffice. Ack. I think I'm going to send a patch tomorrow with support for exposing the OID of the object (when at the time of firing the event trigger it still exists) and support for DROP CASCADE objects thanks to an SRF that you will be able to call from your event trigger function. It appears to me as though it's relevant to return an OID for all of the command tags. Yeah, I've been working on that before and the refactoring in utility.c is already committed by Robert, so that's definitely in-scope. Something useful to clarify in the documentation is what differences are meaningful between ddl_command_start and ddl_command_end to make it easier to determine which event one would most want to use. Will add. Musing a bit... It seems to me that it might be a slick idea to run a trigger at both _start and _end, capturing metadata about the object into temp tables at both times, which would then allow the _end function to compare the data in the temp table to figure out what to do next. I wouldn't think that's apropos as default behaviour; that's something for the crafty developer that's building a trigger function to do. Please remember that at the _end of a DROP the object no longer exists in the catalogs, so you will get a NULL for its OID, and the same at the _start of a CREATE command. 
For implementation reasons, the ALTER case behaves the same as CREATE. Having a parse tree for the query that initiates the event would be mighty useful, as would be a canonicalized form of the query. I think we could add some useful protection (e.g. such as my example of an event trigger that generates DROP TABLE FROM REPLICATION) using the present functionality, even perhaps without OIDs, but I don't think I'd want to get into trying to forward arbitrary DDL without having the canonicalized query available. Thanks. As soon as the OID+CASCADE patch is done, I'm back to working on Command String Normalisation. I think a bit more needs to be said about the meanings of the events and the command tags, but what I imagined missing wasn't. Definitely, another pass on the docs will be needed here. I'd like to be able to postpone that to beta time, as I'm arranging my schedule so as to be available and participating then. Regards, -- Dimitri Fontaine http://2ndQuadrant.fr PostgreSQL : Expertise, Formation et Support -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
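[Editor's sketch] For readers unfamiliar with the feature under discussion, a minimal ddl_command_end event trigger in the 9.3-era syntax looks like this. Note that only the command tag (TG_TAG) is available here; the object OID and DROP CASCADE reporting discussed above are the *proposed* additions, not existing functionality:

```sql
-- Minimal event trigger sketch (PostgreSQL 9.3 syntax).
-- TG_TAG exposes the command tag, e.g. 'DROP TABLE'.
CREATE FUNCTION log_ddl_end() RETURNS event_trigger
LANGUAGE plpgsql AS $$
BEGIN
    RAISE NOTICE 'ddl_command_end fired for: %', tg_tag;
END;
$$;

CREATE EVENT TRIGGER log_ddl ON ddl_command_end
    EXECUTE PROCEDURE log_ddl_end();
```

A replication system like Slony would hang its bookkeeping off such a function; the thread above is about giving that function enough information (OIDs, affected objects) to avoid the pg_class-diffing inference Christopher describes.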
Re: [HACKERS] erroneous restore into pg_catalog schema
Robert Haas robertmh...@gmail.com writes: On Tue, Jan 29, 2013 at 2:30 PM, Alvaro Herrera Robert, are you working on this? I wasn't, but I can, if we agree on it. I think we need to do *something* (and accordingly have added this to the 9.3 open items page so we don't forget about it). Whether Robert's idea is the best one probably depends in part on how clean the patch turns out to be. regards, tom lane -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
[HACKERS] Should pg_dump dump larger tables first?
All, It's perhaps not the ideal time for a discussion, but if I thought it would turn into a long discussion then I'd probably not post this, given the current timing in the release cycle. This is something I thought of while doing a restore on a 40ish GB database which has a few hundred smallish tables of various sizes up to about 1.5 million records, then a handful of larger tables containing 20-70 million records. During the restore (which was running 4 separate jobs), I was polling SELECT query FROM pg_stat_activity to find out the progress of the restore. I noticed that there were now fewer than 4 jobs running and pg_restore was busy doing COPY into some of the 20-70 million record tables. If pg_dump were still to follow the dependencies of objects, would there be any reason why it shouldn't back up larger tables first? This should then allow pg_restore to balance the smaller tables across separate jobs at the end of the restore instead of having CPUs sitting idle while, say, 1 job is busy on a big table. Of course this would not improve things for all workloads, but I hardly think that a database with a high number of smallish tables and a small number of large tables is unusual. If there was consensus that it might be a good idea, I'd be willing to craft up a patch to test whether this is worth it. Some of the things I thought about but did not have an answer for: 1. Would it be enough just to check the number of blocks in each relation, or would it be better to look at the statistics to estimate the size of the table when it's restored, minus the dead tuples? 2. Would it be a good idea to add an extra pg_dump option for this, or just make it the default for all dumps that contain table data? Any thoughts on this are welcome. Regards David Rowley -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
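[Editor's sketch] Point 1 above — ordering by block count — can be approximated today with a catalog query; something like this is the information pg_dump would have to consult:

```sql
-- Approximate "largest first": order user tables by on-disk block
-- count (relpages is maintained by VACUUM/ANALYZE, so it is an
-- estimate, not an exact current size).
SELECT c.relname, c.relpages
FROM pg_class c
JOIN pg_namespace n ON n.oid = c.relnamespace
WHERE c.relkind = 'r'
  AND n.nspname NOT IN ('pg_catalog', 'information_schema')
ORDER BY c.relpages DESC;
```

Using relpages rather than pg_relation_size() avoids touching the filesystem for every relation, at the cost of staleness between ANALYZE runs — which mirrors the "blocks vs. statistics" trade-off David raises.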
Re: [HACKERS] in-catalog Extension Scripts and Control parameters (templates?)
Hi, Please find attached v2 of the Extension Templates patch, with pg_dump support and assorted fixes. It's still missing ALTER RENAME and OWNER facilities, and owner in the dump. There's a design point I want to address with some input before getting there, though. Hence this email. Dimitri Fontaine dimi...@2ndquadrant.fr writes: We now have those new catalogs: - pg_extension_control - pg_extension_template - pg_extension_uptmpl What I did here in pg_dump is adding a new dumpable object type DO_EXTENSION_TEMPLATE where in fact we're fetching entries from pg_extension_control and pg_extension_template and uptmpl. The thing is that we now have a control entry for any script to play, so that we can ALTER the control properties of any known target version. Also, an extension installed from a template keeps a dependency towards the control entry of that template, so that the dump is done with the right ordering. Now, the tricky part that's left over. Say that you have an extension 'pair' with 3 versions available, and those upgrade paths (edited for brevity):

~# select * from pg_extension_update_paths('pair');
 source | target |     path
--------+--------+---------------
 1.0    | 1.1    | 1.0--1.1
 1.0    | 1.2    | 1.0--1.1--1.2
 1.1    | 1.2    | 1.1--1.2

CREATE EXTENSION pair VERSION '1.2'; PostgreSQL didn't know how to do that before, and still does not. That feature is implemented in another patch of mine for 9.3, quietly waiting for attention to get back to it, and answering a gripe initially expressed by Robert: https://commitfest.postgresql.org/action/patch_view?id=968 Given the ability to install an extension from a default_version then apply the update path to what the user asked, we would have been able to ship hstore 1.0 and the 1.0--1.1 script in 9.2, without having to consider dropping the 1.0 version yet. Now, back to Extension Templates: the pg_dump output from the attached patch is not smart enough to cope with an extension that has been upgraded; it will only install the *default* version of it. 
There are two ways that I see about addressing that point:

- implement default_full_version support for CREATE EXTENSION and have it working both in the case of file-based installation and template-based installation, then the pg_dump work is really straightforward;

  CREATE EXTENSION pair VERSION '1.2'; -- will install 1.0 then update

- add smarts into pg_dump to understand the shortest path of installation and upgrade going from the current default_version to the currently installed version of a template-based extension, so as to be able to produce the right order of commands, e.g.:

  CREATE EXTENSION pair;                -- default is 1.0
  ALTER EXTENSION pair UPDATE TO '1.2'; -- updates to 1.1 then 1.2

As you might have guessed already, if I'm going to implement some smarts in the system to cope with installation-time update paths, I'd rather do it once in the backend than hack it together in pg_dump only for the template-based case. Should I merge the default_full_version patch into the Extension Templates one, or would we rather first see about committing the default one then the template one, or the other way around, or something else I didn't think about? Regards, -- Dimitri Fontaine http://2ndQuadrant.fr PostgreSQL : Expertise, Formation et Support templates.v2.patch.gz Description: Binary data -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] [sepgsql 2/3] Add db_schema:search permission checks
On 01/29/2013 10:10 PM, Simon Riggs wrote: On 29 January 2013 13:30, Kohei KaiGai kai...@kaigai.gr.jp wrote: It makes it impossible to control execution of functions from the viewpoint of SELinux, and there is no way for SELinux to prevent execution of functions defined by other domains, or by others who are not permitted. Also, what we want to do is almost the same as the existing permission checks, except for the criteria used to make the access control decision. Do you have a roadmap of all the things this relates to? If SELinux has a viewpoint, I'd like to be able to see a list of capabilities and then which ones are currently missing. I guess I'm looking for external assurance that someone somewhere needs this and that it fits into a complete overall plan of what we should do. Just as we are able to use the SQL Standard as a guide to what we need to implement, we would like something to refer back to. Does this have a request id, specification document page number or whatever? I think that would greatly assist people in understanding why these patches are necessary, what real-world functionality they lead to, and what problems they solve. Some info on the wiki may be a good option. For example, if you were to say these changes will help with multi-tenant PostgreSQL installations by x, then that will catch some people's eye. -- Craig Ringer http://www.2ndQuadrant.com/ PostgreSQL Development, 24x7 Support, Training Services -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] Should pg_dump dump larger tables first?
David Rowley dgrowle...@gmail.com writes: If pg_dump was to still follow the dependencies of objects, would there be any reason why it shouldn't backup larger tables first? Pretty much every single discussion/complaint about pg_dump's ordering choices has been about making its behavior more deterministic not less so. So I can't imagine such a change would go over well with most folks. Also, it's far from obvious to me that largest first is the best rule anyhow; it's likely to be more complicated than that. But anyway, the right place to add this sort of consideration is in pg_restore --parallel, not pg_dump. I don't know how hard it would be for the scheduler algorithm in there to take table size into account, but at least in principle it should be possible to find out the size of the (compressed) table data from examination of the archive file. regards, tom lane -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] Hm, table constraints aren't so unique as all that
I wrote: Over in the thread about enhanced error fields, I claimed that constraints are uniquely named among those associated with a table, or with a domain. But it turns out that that ain't necessarily so, because the code path for index constraints doesn't pay any attention to pre-existing check constraints: ... I think we need to tighten this down by having index-constraint creation check for conflicts with other constraint types. It also seems like it might be a good idea to put in a unique index to enforce the intended lack of conflicts --- note that the existing index on (conname, connamespace) isn't unique. It's a bit problematic that pg_constraint contains both table-related constraints and domain-related constraints, but it strikes me that we could get close enough by changing pg_constraint_conname_nsp_index to be a unique index on (conname, connamespace, conrelid, contypid). I experimented with changing pg_constraint's index that way. It doesn't seem to break anything, but it turns out not to fix the problem completely either, because if you use CREATE INDEX syntax to create an index then no pg_constraint entry is made at all. So it's still possible to have an index with the same name as some non-index constraint on the same table. If we wanted to pursue this, we could think about decreeing that every index must have a pg_constraint entry. That would have some attraction from the standpoint of catalog-entry uniformity, but there are considerable practical problems in the way as well. Notably, what would we do for the conkey field in pg_constraint for an expression index? (Failing to set that up as expected might well break client-side code.) Also, I think we'd end up with the pg_depend entry between the index and the constraint pointing in opposite directions depending on whether the index was made using CONSTRAINT syntax or CREATE INDEX syntax. 
There's some precedent for that with the linkage between pg_class entries and their pg_type rowtype entries, but that's a mess that I'd rather not replicate. Or we could leave the catalogs alone and just add more pre-creation checking for conflicts. That doesn't seem very bulletproof though because of possible race conditions. I think that right now it'd be safe enough because of the table-level locks taken by ALTER TABLE and CREATE INDEX --- but if the project to reduce ALTER TABLE's locking level ever gets resurrected, we'd be at serious risk of introducing a problem there. Or on the third hand, we could just say it's okay if there are conflicts between index names and check-constraint names. Any given SQLSTATE would only be mentioning one of these types of constraints, so it's arguable that there's not going to be any real ambiguity in practice. At the moment I'm inclined to leave well enough alone. Thoughts? regards, tom lane -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
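[Editor's sketch] The collision Tom describes can be reproduced with a minimal session like the following. Behaviour shown is for the code under discussion (the index-constraint path not checking pre-existing CHECK constraints); a later release may reject the second statement. Table and constraint names are placeholders:

```sql
-- A CHECK constraint claims the name t_con ...
CREATE TABLE t (
    a int CONSTRAINT t_con CHECK (a > 0)
);

-- ... and the index-constraint creation path, which does not look at
-- pre-existing CHECK constraints, may accept the same name, leaving
-- two pg_constraint rows named t_con for the same table:
ALTER TABLE t ADD CONSTRAINT t_con UNIQUE (a);
```

This is exactly why the proposed unique index on (conname, connamespace, conrelid, contypid) would not be a complete fix: an index created via plain CREATE INDEX gets no pg_constraint row at all, so the uniqueness check never sees it.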
Re: [HACKERS] COPY FREEZE has no warning
On Fri, Jan 25, 2013 at 11:28:58PM -0500, Bruce Momjian wrote: On Fri, Jan 25, 2013 at 11:08:56PM -0500, Tom Lane wrote: Bruce Momjian br...@momjian.us writes:

!           ereport(ERROR,
!                   (errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
!                    errmsg("cannot perform FREEZE because of previous table activity in the current transaction")));

[ itch... ] What is table activity? I always thought of tables as being rather passive objects. And anyway, isn't this backwards? What we're complaining of is *lack* of activity. I don't see why this isn't using the same message as the other code path, namely Well, here is an example of this message:

BEGIN;
TRUNCATE vistest;
SAVEPOINT s1;
COPY vistest FROM stdin CSV FREEZE;
ERROR: cannot perform FREEZE because of previous table activity in the current transaction
COMMIT;

Clearly it was truncated in the same transaction, but the savepoint somehow invalidates the freeze. There is a C comment about it: the savepoint prevents the COPY FREEZE, because COPY FREEZE needs the table to have been created or truncated in the current *sub*transaction. Issuing RELEASE s1 before the COPY makes it work again, for example.

 * BEGIN;
 * TRUNCATE t;
 * SAVEPOINT save;
 * TRUNCATE t;
 * ROLLBACK TO save;
 * COPY ...

This is different. The table was truncated in the current subtransaction, and it's safe in principle to apply the optimization. Due to an implementation artifact, we'll reject it anyway. -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
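[Editor's sketch] The RELEASE workaround mentioned above, spelled out as a session (table name and data are placeholders; per the thread, releasing the savepoint lets the TRUNCATE count as belonging to the current subtransaction again, so FREEZE is accepted):

```sql
BEGIN;
TRUNCATE vistest;
SAVEPOINT s1;
RELEASE s1;                          -- without this, COPY ... FREEZE errors
COPY vistest FROM stdin CSV FREEZE;  -- now accepted
\.
COMMIT;
```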
Re: [HACKERS] BUG #7493: Postmaster messages unreadable in a Windows console
On Tue, Jan 29, 2013 at 09:54:04AM -0500, Tom Lane wrote: Alexander Law exclus...@gmail.com writes: Please look at the following l10n bug: http://www.postgresql.org/message-id/502a26f1.6010...@gmail.com and the proposed patch. That patch looks entirely unsafe to me. Neither of those functions should be expected to be able to run when none of our standard infrastructure (palloc, elog) is up yet. Possibly it would be safe to do this somewhere around where we do GUC initialization. Even then, I wouldn't be surprised to find problematic consequences beyond error display. What if all the databases are EUC_JP, the platform encoding is KOI8, and some postgresql.conf settings contain EUC_JP characters? Does the postmaster not rely on its use of SQL_ASCII to allow those values? I would look at fixing this by making the error output machinery smarter in this area before changing the postmaster's notion of server_encoding. -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] lazy_vacuum_heap()'s removal of HEAPTUPLE_DEAD tuples
On Mon, Jan 28, 2013 at 07:24:04PM +0530, Pavan Deolasee wrote: On Wed, Jan 23, 2013 at 10:05 AM, Noah Misch n...@leadboat.com wrote: You're the second commentator to be skittish about the patch's correctness, so I won't argue against a conservatism-motivated bounce of the patch. Can you please rebase the patch against the latest head ? I see Alvaro's and Simon's recent changes has bit-rotten the patch. Attached. *** a/src/backend/access/heap/heapam.c --- b/src/backend/access/heap/heapam.c *** *** 5553,5584 HeapTupleHeaderAdvanceLatestRemovedXid(HeapTupleHeader tuple, } /* - * Perform XLogInsert to register a heap cleanup info message. These - * messages are sent once per VACUUM and are required because - * of the phasing of removal operations during a lazy VACUUM. - * see comments for vacuum_log_cleanup_info(). - */ - XLogRecPtr - log_heap_cleanup_info(RelFileNode rnode, TransactionId latestRemovedXid) - { - xl_heap_cleanup_info xlrec; - XLogRecPtr recptr; - XLogRecData rdata; - - xlrec.node = rnode; - xlrec.latestRemovedXid = latestRemovedXid; - - rdata.data = (char *) xlrec; - rdata.len = SizeOfHeapCleanupInfo; - rdata.buffer = InvalidBuffer; - rdata.next = NULL; - - recptr = XLogInsert(RM_HEAP2_ID, XLOG_HEAP2_CLEANUP_INFO, rdata); - - return recptr; - } - - /* * Perform XLogInsert for a heap-clean operation. Caller must already * have modified the buffer and marked it dirty. * --- 5553,5558 *** *** 5930,5956 log_newpage_buffer(Buffer buffer) } /* - * Handles CLEANUP_INFO - */ - static void - heap_xlog_cleanup_info(XLogRecPtr lsn, XLogRecord *record) - { - xl_heap_cleanup_info *xlrec = (xl_heap_cleanup_info *) XLogRecGetData(record); - - if (InHotStandby) - ResolveRecoveryConflictWithSnapshot(xlrec-latestRemovedXid, xlrec-node); - - /* -* Actual operation is a no-op. Record type exists to provide a means for -* conflict processing to occur before we begin index vacuum actions. 
see -* vacuumlazy.c and also comments in btvacuumpage() -*/ - - /* Backup blocks are not used in cleanup_info records */ - Assert(!(record-xl_info XLR_BKP_BLOCK_MASK)); - } - - /* * Handles HEAP2_CLEAN record type */ static void --- 5904,5909 *** *** 7057,7065 heap2_redo(XLogRecPtr lsn, XLogRecord *record) case XLOG_HEAP2_CLEAN: heap_xlog_clean(lsn, record); break; - case XLOG_HEAP2_CLEANUP_INFO: - heap_xlog_cleanup_info(lsn, record); - break; case XLOG_HEAP2_VISIBLE: heap_xlog_visible(lsn, record); break; --- 7010,7015 *** a/src/backend/access/heap/pruneheap.c --- b/src/backend/access/heap/pruneheap.c *** *** 121,133 heap_page_prune_opt(Relation relation, Buffer buffer, TransactionId OldestXmin) * have pruned while we hold pin.) */ if (PageIsFull(page) || PageGetHeapFreeSpace(page) minfree) - { - TransactionId ignore = InvalidTransactionId; /* return value not - * needed */ - /* OK to prune */ ! (void) heap_page_prune(relation, buffer, OldestXmin, true, ignore); ! } /* And release buffer lock */ LockBuffer(buffer, BUFFER_LOCK_UNLOCK); --- 121,128 * have pruned while we hold pin.) */ if (PageIsFull(page) || PageGetHeapFreeSpace(page) minfree) /* OK to prune */ ! (void) heap_page_prune(relation, buffer, OldestXmin, true); /* And release buffer lock */ LockBuffer(buffer, BUFFER_LOCK_UNLOCK); *** *** 148,159 heap_page_prune_opt(Relation relation, Buffer buffer, TransactionId OldestXmin) * send its own new total to pgstats, and we don't want this delta applied * on top of that.) * ! * Returns the number of tuples deleted from the page and sets ! * latestRemovedXid. */ int heap_page_prune(Relation relation, Buffer buffer, TransactionId OldestXmin, ! bool report_stats, TransactionId *latestRemovedXid) { int ndeleted = 0; Pagepage = BufferGetPage(buffer); --- 143,153 * send its own new total to pgstats, and we don't want this delta applied * on top of that.) * ! * Returns the number of tuples deleted from the page. */ int heap_page_prune(Relation relation, Buffer buffer,
Re: [HACKERS] psql \l to accept patterns
(2013/01/30 0:34), Tom Lane wrote: Satoshi Nagayasu sn...@uptime.jp writes: On Mon, 2013-01-07 at 07:14 -0500, Peter Eisentraut wrote: Here is a patch for psql's \l command to accept patterns, like \d BTW, is there any good place to put new regression tests for the psql command? I couldn't find one. As far as a test for this specific feature goes, I'd be against adding one, because it'd be likely to result in random failures in make installcheck mode, depending on what other databases are in the installation. More generally, we've tended to put tests of \d-style psql features together with the relevant backend-side tests. Yes, I think so too. First of all, I was looking for some regression tests for CREATE/ALTER/DROP DATABASE commands, but I couldn't find them in the test/regress/sql/ directory. So, I asked the question. I guess these database tests are in pg_regress.c. Right? The proposed patch to add \gset adds a separate regression test file specifically for psql. I've been debating whether that was worth committing; but if there's near-term interest in adding any more tests for psql features that aren't closely connected to backend features, maybe it's worth having such a file. Personally, I'm interested in having regression tests whatever the target is, because software tends to become more complicated. So, if we reach consensus to have dedicated tests for the psql command (or other client-side commands), I wish to contribute to it. Regards, -- Satoshi Nagayasu sn...@uptime.jp Uptime Technologies, LLC. http://www.uptime.jp -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] psql \l to accept patterns
Satoshi Nagayasu sn...@uptime.jp writes: First of all, I was looking for some regression tests for CREATE/ALTER/DROP DATABASE commands, but I couldn't find them in the test/regress/sql/ directory. So, I asked the question. I guess these database tests are in pg_regress.c. Right? Yeah, we don't bother with explicit tests of CREATE/DROP DATABASE because that's inherently tested by creating/replacing the regression database(s). And these actions are expensive enough that I'm not eager to add several more of them to the test sequence without darn good reason. I'm not sure how much of ALTER DATABASE's functionality we're testing, though as you say pg_regress itself does some of that. It might be reasonable to add some more tests of ALTER cases. regards, tom lane -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
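[Editor's sketch] The kind of ALTER DATABASE cases such tests might exercise, without the expense of extra CREATE/DROP DATABASE cycles (database and setting names are illustrative placeholders):

```sql
-- Per-database GUC settings and their removal:
ALTER DATABASE regression SET work_mem = '64MB';
ALTER DATABASE regression RESET work_mem;

-- Rename round-trip, leaving the installation as it was found:
ALTER DATABASE regression RENAME TO regression_tmp;
ALTER DATABASE regression_tmp RENAME TO regression;
```

Cases like these reuse the database pg_regress already created, which addresses Tom's concern about not adding several more expensive create/drop actions to the test sequence.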
Re: [HACKERS] passing diff options to pg_regress
On Wed, 2013-01-16 at 14:35 +0530, Jeevan Chalke wrote: However, I think you need to add this in docs. Letting people know about this environment variable to make use of that. Done and committed. Thanks. -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] pg_ctl idempotent option
I don't think I like --force because it isn't clear if we are forcing the start to have done something, or forcing the server to be running. Do we need this idempotent feature for stop too? Yes, of course. -- Josh Berkus PostgreSQL Experts Inc. http://pgexperts.com -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] BUG #7493: Postmaster messages unreadable in a Windows console
30.01.2013 05:51, Noah Misch wrote: On Tue, Jan 29, 2013 at 09:54:04AM -0500, Tom Lane wrote: Alexander Law exclus...@gmail.com writes: Please look at the following l10n bug: http://www.postgresql.org/message-id/502a26f1.6010...@gmail.com and the proposed patch. That patch looks entirely unsafe to me. Neither of those functions should be expected to be able to run when none of our standard infrastructure (palloc, elog) is up yet. Possibly it would be safe to do this somewhere around where we do GUC initialization. Looking at elog.c:write_console, bootstrap.c:AuxiliaryProcessMain, and mcxt.c:MemoryContextInit, I would place this call (SetDatabaseEncoding(GetPlatformEncoding())) at MemoryContextInit. (The pgwin32_toUTF16 conversion branch is not executed while CurrentMemoryContext is null.) But I see some calls to ereport before MemoryContextInit. Is that OK, or should MemoryContext initialization be done earlier? For example: main.c:main - pgwin32_signal_initialize - ereport. And there is another issue with elog.c:write_stderr: if pgwin32_is_service() is true, then the process writes the message to the Windows event log (write_eventlog), trying to convert it to UTF16. But it doesn't check the MemoryContext before the call to pgwin32_toUTF16 (as write_console does), and we can get a crash in the following way: main.c:check_root - if (pgwin32_is_admin()) write_stderr - if (pgwin32_is_service()) write_eventlog - if (GetDatabaseEncoding() != GetPlatformEncoding()) pgwin32_toUTF16 - crash. So placing SetDatabaseEncoding(GetPlatformEncoding()) before the check_root call could be a solution for the issue. Even then, I wouldn't be surprised to find problematic consequences beyond error display. What if all the databases are EUC_JP, the platform encoding is KOI8, and some postgresql.conf settings contain EUC_JP characters? Does the postmaster not rely on its use of SQL_ASCII to allow those values? 
I would look at fixing this by making the error output machinery smarter in this area before changing the postmaster's notion of server_encoding. Maybe I'm still missing something, but I thought that postinit.c:CheckMyDatabase would switch the encoding of messages to EUC_JP via pg_bind_textdomain_codeset, so there would be no issue with it. But until then, KOI8 should be used. Regarding postgresql.conf: as it has no explicit encoding specification, it should be interpreted as having the platform encoding. So in your example it should contain KOI8, not EUC_JP, characters. Thanks, Alexander -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] pg_dump --pretty-print-views
Hi Marko, On Wed, Jan 30, 2013 at 2:07 AM, Marko Tiikkaja pgm...@joh.to wrote: On Tue, 29 Jan 2013 10:18:51 +0100, Jeevan Chalke jeevan.cha...@enterprisedb.com wrote: That's fine. I am not at all pointing that at you. Have a look at this: Here's the third version of this patch, hopefully this time without any problems. I looked through the patch and it looked OK, but I did that last time too, so I wouldn't trust myself on that one. Looks good this time. Will mark Ready for Committer. However, I am not sure about using WRAP_COLUMN_DEFAULT by default. In my opinion we should use it by default, but other people may object, so I will leave that decision to the committer. Thanks Regards, Marko Tiikkaja -- Jeevan B Chalke Senior Software Engineer, R&D EnterpriseDB Corporation The Enterprise PostgreSQL Company Phone: +91 20 30589500 Website: www.enterprisedb.com EnterpriseDB Blog: http://blogs.enterprisedb.com/ Follow us on Twitter: http://www.twitter.com/enterprisedb