Re: [PATCHES] FW: Timezone library
Bruce Momjian wrote: > I have added this to CVS under src/timezone and have integrated this > into configure. I think it would be better to put it under one of the generic subdirectories, such as src/utils/timezone. We may have to add additional libraries of this kind in the future, and it would be better to have them grouped somehow. ---(end of broadcast)--- TIP 9: the planner will ignore your desire to choose an index scan if your joining column's datatypes do not match
Re: [PATCHES] width_bucket() per SQL2003 (WIP)
On 30-Apr-04, at 1:06 AM, Tom Lane wrote: Other than "it's in SQL2003", what's the point of implementing this? No idea, but it doesn't really matter IMHO: it is in SQL2003, it doesn't interfere with the rest of the system or impose a significant maintenance burden and it has essentially zero negative side effects. So why _not_ implement it? Trying to implement everything in Oracle is a goal up with which we must not try to keep. Granted, but that's not the case here. -Neil ---(end of broadcast)--- TIP 5: Have you checked our extensive FAQ? http://www.postgresql.org/docs/faqs/FAQ.html
Re: [PATCHES] width_bucket() per SQL2003 (WIP)
Other than "it's in SQL2003", what's the point of implementing this? It seems like a mighty marginal feature ... I don't see a problem with implementing every function that SQL2003 has...isn't that the point of implementing the standard? Chris ---(end of broadcast)--- TIP 6: Have you searched our list archives? http://archives.postgresql.org
Re: [PATCHES] FW: Timezone library
Bruce Momjian <[EMAIL PROTECTED]> writes: > /usr/include/time.h:104: `pg_timezone' redeclared as different kind of > symbol Your time.h really defines "pg_timezone"?? I'm wondering if this is an indirect effect of a macro naming collision. regards, tom lane ---(end of broadcast)--- TIP 6: Have you searched our list archives? http://archives.postgresql.org
Re: [PATCHES] width_bucket() per SQL2003 (WIP)
Neil Conway <[EMAIL PROTECTED]> writes: > The attached patch implements the width_bucket() function (as specified > by Section 6.27 of the SQL2003 standard, in particular page 250 in my > copy). Other than "it's in SQL2003", what's the point of implementing this? It seems like a mighty marginal feature ... > I believe the function comes from Oracle 9i, Trying to implement everything in Oracle is a goal up with which we must not try to keep. regards, tom lane ---(end of broadcast)--- TIP 6: Have you searched our list archives? http://archives.postgresql.org
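For readers unfamiliar with the function under discussion: width_bucket(operand, low, high, count) assigns a value to one of count equal-width histogram buckets. A minimal Python sketch of the SQL2003 semantics (ascending-bounds case only; the helper name and error handling here are illustrative, not the patch's actual C code):

```python
def width_bucket(operand, low, high, count):
    """Equal-width histogram bucket per SQL2003's <width bucket function>.

    Buckets 1..count cover [low, high); operands below low fall into
    bucket 0, operands at or above high into bucket count + 1.
    (SQL2003 also defines the low > high case, omitted here.)
    """
    if count <= 0 or low >= high:
        raise ValueError("invalid width_bucket arguments")
    if operand < low:
        return 0                      # below the histogram range
    if operand >= high:
        return count + 1              # at or above the histogram range
    return int(count * (operand - low) / (high - low)) + 1
```

For example, width_bucket(5, 0, 10, 5) lands in bucket 3, while out-of-range operands go to the catch-all buckets 0 and count + 1.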
Re: [PATCHES] FW: Timezone library
I have added this to CVS under src/timezone and have integrated this into configure. However, I am getting a compile error, probably because my OS has a timezone function defined in time.h: gcc -O2 -fno-strict-aliasing -Wall -Wmissing-prototypes -Wmissing-declarations -O1 -Wall -Wmissing-prototypes -Wmissing-declarations -Wpointer-arith -Wcast-align -I../../src/include -I/usr/local/include/readline -I/usr/contrib/include -c -o localtime.o localtime.c In file included from private.h:91, from localtime.c:25: /usr/include/time.h:104: `pg_timezone' redeclared as different kind of symbol localtime.c:5: previous declaration of `pg_timezone' localtime.c:203: `pg_timezone' redeclared as different kind of symbol /usr/include/time.h:104: previous declaration of `pg_timezone' localtime.c:933: warning: static declaration for `tzsetwall' follows non-static gmake: *** [localtime.o] Error 1 Fortunately only Win32 compiles this code right now. --- Magnus Hagander wrote: > This mail apparently didn't make it through because it was too large. > Resending it without the largest file, tzlib.tgz. I've put this file (+ > the patches) on http://www.hagander.net/pgsql/. > > > //Magnus > > > > -Original Message- > > From: Magnus Hagander > > Sent: Sunday, April 18, 2004 9:05 PM > > To: [EMAIL PROTECTED] > > Subject: Timezone library > > > > > > Hi! > > > > Attached is a slightly modified version of Arthur David > > Olson's timezone library for inclusion in postgresql, pending > > review of course. This is required to give win32 working > > timezone functions on dates before 1970. It has also been > > indicated that it might be interesting to have such a library > > on other platforms as well. Note that with this enabled, the > > timezone data files are installed in /share/timezone. > > > > Attached are the following files: > > * tzlib.tgz - the timezone implementation, including the > > datafiles required to build the timezone data files. 
> > * timezone.patch - the modifications made to existing files > > to make it work. This includes setting the FRONTEND flag for ecpg. > > * tzcode.diff - lists the modifications that I have made to > > the files from the original timezone files, for reference. > > > > > > Bruce has offered to do the autoconf and Makefile integration > > parts - I don't know autoconf enough to do that part... The > > #defines necessary are "activated" by setting USE_PGTZ. Of > > course, the appropriate files also have to be linked in. > > > > > > With this patch applied, and the timezone data installed, > > win32 now passes all time-related regression tests (3 now > > fail, 2 "known" and one being locale-inflicted, still > > investigating that one) > > > > > > //Magnus > > > > > > > > > Content-Description: tzcode_changes.diff [ Attachment, skipping... ] Content-Description: timezone.patch [ Attachment, skipping... ] > > ---(end of broadcast)--- > TIP 4: Don't 'kill -9' the postmaster -- Bruce Momjian| http://candle.pha.pa.us [EMAIL PROTECTED] | (610) 359-1001 + If your life is a hard drive, | 13 Roberts Road + Christ can be your backup.| Newtown Square, Pennsylvania 19073 ---(end of broadcast)--- TIP 8: explain analyze is your friend
Re: [PATCHES] PITR Phase 1 - Full set of patches
On Thu, Apr 29, 2004 at 09:58:49PM -0400, Alvaro Herrera wrote: > Note that sometimes I do things incrementally, so I do "incremental > diffs" by commands like > cd /home/alvherre/CVS/pgsql/source > diff -cr --exclude-from=ignore-diff 01xact 02blahblah ^^^ Note that this should be -Ncr -- I don't create a lot of new files ... -- Alvaro Herrera () "Cuando no hay humildad las personas se degradan" (A. Christie) ---(end of broadcast)--- TIP 4: Don't 'kill -9' the postmaster
Re: [PATCHES] PITR Phase 1 - Full set of patches
On Thu, Apr 29, 2004 at 11:21:53PM +0100, Simon Riggs wrote: > ...my patch building experience is less than some might > expect so there are various possible annoyances here. I am on hand to > help and to learn by my mistakes. I see you attached one patch for every file ... this works, but is error prone. May I suggest another approach: 1. get CVSup if you can. I have Linux, so I got the ezm3 package and the CVSup package. Compile/install ezm3 (it's a Modula-3 compiler) first and then CVSup. Google for them, I don't have URLs. 2. get the whole Postgres repository using CVSup with this config file: >>>>>>>> *default host=cvsup.postgresql.org *default compress *default release=cvs *default delete use-rel-suffix # *default tag=. # base directory where CVSup will store its 'bookmarks' file(s) *default base=/home/alvherre/cvs # prefix directory where CVSup will store the actual distribution(s) *default prefix=/home/alvherre/cvs # complete distribution repository >>>>>>>> This will create a "cvsroot". Adjust the path, obviously. Note that case matters because I have a "cvs" directory with the cvsroot, and a "CVS" directory for the checkouts. This may be a stupid idea but it is how I did it some time ago :-) 3. Checkout a couple of trees cvs -d /home/alvherre/cvs co pgsql I have several pgsql trees: /home/alvherre/CVS/pgsql/source/00orig this is the pristine CVS tip. My first modification in /home/alvherre/CVS/pgsql/source/01xact second in /home/alvherre/CVS/pgsql/source/02blahblah etc, you get the idea. (I have a "74_rel" and "73_rel" too, just in case.) Then I created a "build" tree, in /home/alvherre/CVS/pgsql/build. So each time I want to compile, I create a /home/alvherre/CVS/pgsql/build/00orig, cd to it, and then do ../../source/00orig/configure --enable-debug [etc] --prefix=/home/alvherre/CVS/pgsql/install/00orig 4. 
To generate a patch, just do cd /home/alvherre/CVS/pgsql/01xact cvs -q diff Note that sometimes I do things incrementally, so I do "incremental diffs" by commands like cd /home/alvherre/CVS/pgsql/source diff -cr --exclude-from=ignore-diff 01xact 02blahblah The ignore-diff file is there because some files are built on the source tree and generate a lot of irrelevant differences. It contains >>>>>>>> pl_gram.c pl.tab.h pl_scan.c psqlscan.c sql_help.h preproc.c preproc.h pgc.c guc-file.c fmgrtab.c fmgroids.h parse.h gram.c scan.c bootparse.c bootstrap_tokens.h bootscanner.c *~ tags CVS .*sw[poq] >>>>>>>> So, by this time you are wondering why I use a number as the start of the tree name. Well, this is because I have a script which allows me to - build (configure, make) - install (make install) - initdb - run a server - run a psql client - run a "make check" against a running server And I may want to do any combination of the above with any combination of the pgsql trees I have, simultaneously. So I picked a random number as "base", and the script takes the sum of this number and the number at the start of the tree name, and uses it as the port (--with-pgport). So I can run all servers and clients, with no conflict. If there is any interest I can post the script too. It's somewhat ugly but it has worked for me. -- Alvaro Herrera () "La virtud es el justo medio entre dos defectos" (Aristóteles) ---(end of broadcast)--- TIP 9: the planner will ignore your desire to choose an index scan if your joining column's datatypes do not match
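Alvaro's port-assignment script isn't posted, but the scheme he describes (a fixed base number plus the numeric prefix of the tree name) can be sketched as follows; the base value and helper name are made up for illustration:

```python
import re

def port_for_tree(tree_name, base=7000):
    """Derive a unique --with-pgport value from a tree name like
    '01xact' or '02blahblah' by adding its numeric prefix to a base
    port, so every tree's server can run at the same time without
    conflicts.  base=7000 is an arbitrary stand-in for the 'random
    number' the script picks."""
    m = re.match(r"(\d+)", tree_name)
    if not m:
        raise ValueError("tree name must start with a number: %r" % tree_name)
    return base + int(m.group(1))
```

With this scheme the 00orig, 01xact, and 02blahblah servers would listen on 7000, 7001, and 7002 respectively.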
Re: [PATCHES] subtransactions -- storage manager
Simon Riggs wrote: > On Sun, 2004-04-25 at 19:06, Alvaro Herrera wrote: > > Hackers, > > > > This patch adds subtransaction support into the storage manager. Files > > created or dropped inside a subtransaction are correctly dealt with at > > subtransaction commit or abort. > > > - pg_clog/pg_subtrans. Need a solution. > > > > > > PS: somehow I managed to get tired of the phrase "nested transactions" > > and I'm using the term "subtransactions" instead. In my head they are > > the same thing ... > > Impressive. > > As you're aware, our current work overlaps. > pg_clog doesn't seem like the place to record subtransactions, though > maybe it is... could we not give subtransactions a txnid just as with > flat transactions? That way we can record everything in pg_clog AND > recovery will work without further modification - as long as the failure > of a top level transaction causes failure of every subtransaction EVEN > if the subtrans originally committed. The problem is we have to atomically mark multiple transactions as committed/aborted in pg_clog. Each subtransaction does get its own xid, it is just that pg_subtrans maps each xid to its parent xid for use in atomically marking the xids as committed/aborted. Recovery via xlog should be fine. > If you add pg_subtrans, you will need to make recovery work all over > again...really, you don't want to be doing that, do you? > > I also have other questions > Forgive my lack of attention: I want SAVEPOINTs, not subtransactions... > how do we do those? Savepoints are basically just a BEGIN at the save point, and a ROLLBACK to get you back to the saved spot. It is just window-dressing on top of nested transactions. > My last focus on this was to do with SQL handling of transactional > rollback characteristics on error. 
PostgreSQL performs rollback on > complete txn when error occurs, rather than allowing statement level > abort and then retry...this was characterised as requiring "nested > transactions"...are you aware of this...is it on your roadmap? Yes, that is the whole point --- to allow individual queries to fail without rolling back the entire transaction. -- Bruce Momjian| http://candle.pha.pa.us [EMAIL PROTECTED] | (610) 359-1001 + If your life is a hard drive, | 13 Roberts Road + Christ can be your backup.| Newtown Square, Pennsylvania 19073 ---(end of broadcast)--- TIP 6: Have you searched our list archives? http://archives.postgresql.org
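Bruce's point about pg_subtrans — each xid maps to its parent so that related xids can be resolved together — can be illustrated with a toy model. Plain dicts stand in for pg_clog and pg_subtrans here; this is an illustration of the idea under discussion, not PostgreSQL's actual code:

```python
COMMITTED, ABORTED, IN_PROGRESS = "committed", "aborted", "in progress"

def effective_status(xid, clog, subtrans_parent):
    """Resolve a subtransaction's visible status: a subtransaction
    counts as committed only if it AND every ancestor up to the
    top-level transaction committed.  'clog' maps xid -> local status,
    'subtrans_parent' maps each subtransaction xid -> its parent xid."""
    while True:
        if clog[xid] != COMMITTED:
            return clog[xid]          # an abort anywhere poisons the chain
        parent = subtrans_parent.get(xid)
        if parent is None:            # reached the top-level transaction
            return COMMITTED
        xid = parent
```

An abort anywhere in the ancestor chain makes the subtransaction's changes invisible, which is exactly why a top-level failure must override a subtransaction's local commit.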
Re: [PATCHES] SECURITY DEFINER not being propagated...
I think the attached patch addresses both of your concerns. Perhaps something like this will work, but the patch as given can't possibly be right (or have been tested with any care): Not tested in the slightest, actually. The attached has been, however, and is commented and tested. A larger problem is that the reason that control makes it through that path at the moment is this check in pg_namespace_aclcheck: /* * If we have been assigned this namespace as a temp namespace, assume * we have all grantable privileges on it. */ if (isTempNamespace(nsp_oid)) return ACLCHECK_OK; Yup, just figured that out. test=> SET search_path = pg_temp_2; test=> \dt List of relations Schema | Name | Type | Owner ---++---+--- pg_temp_2 | tmptbl | table | dba (1 row) test=> CREATE sequence tmp_seq; test=> \ds List of relations Schema | Name | Type | Owner ---+-+--+--- pg_temp_2 | tmp_seq | sequence | nxad (1 row) :-/ Which leads to a different problem with error reporting. I've changed pg_namespace_aclcheck() to the following: # BEGIN if (isTempNamespace(nsp_oid)) { if (pg_database_aclcheck(MyDatabaseId, GetUserId(), ACL_CREATE_TEMP) == ACLCHECK_OK) return ACLCHECK_OK; else return ACLCHECK_NO_PRIV; } # END Which works alright, but I'm worried you'll think it will lead the user astray if they don't have TEMP privs on the database and set their search path to an already existing pg_temp_%d (created by a user who does have TEMP privs). I think it's reasonably easy to justify the confusion in that most users will be smacked with the 'permission denied to create temporary tables in database "test"' message when they try CREATE TEMP TABLE foo(i INT); since most users won't be using a FUNCTION to create temp tables. (Since the temp namespace is created as owned by the superuser, ordinary users would always fail to create temp tables without this escape hatch.) 
I am not at all convinced that this check could be removed, but I also wonder whether its presence doesn't create some issues that are security holes if we adopt your definition of how temp table creation ought to behave. With the aclcheck now moved into pg_namespace_aclcheck() - which gets used everywhere already - I would think this would be as secure as what we've got right now. For a bit I was concerned about a user and superuser creating a temp table with the same name and thought about creating a temp schema for each backend and each user, but backed off from that because it would add a fair amount of complexity. This is pretty much a non-argument, as there is no part of it that says that you have to revoke the right to create temp tables from Joe User. What is necessary and sufficient is that the particular temp table you want to keep your info in has to be owned by, and only accessible to, the more-privileged account. You need not muck with the temp namespace behavior before you can do that. Correct. I don't have to REVOKE TEMP privs in order to store info across transactions. I do need to REVOKE TEMP privs to reduce unauthorized users from creating temp tables and filling up my disk. I know I could simply, "not allow unauthorized users/clients from accessing your database," (I do that on 99/100 of my databases, but in this case I can't/don't want to) but I've got a device that runs PostgreSQL and is secured to the point that I've opened it up for public connections without fear of information loss to unauthorized parties. 
-sc Post patch: test=> SHOW search_path; search_path --- pg_temp_2, public (1 row) test=> SELECT public.setuid_wrapper(); -- Creates the temp table && schema setuid_wrapper t (1 row) test=> \dt List of relations Schema | Name | Type | Owner ---++---+--- pg_temp_2 | tmptbl | table | sean (1 row) test=> CREATE SEQUENCE tmp_seq; CREATE SEQUENCE test=> \ds List of relations Schema | Name| Type | Owner +---+--+--- public | tmp_seq | sequence | nxad (1 row) test=> CREATE SEQUENCE pg_temp_2.tmp2_seq; ERROR: permission denied for schema pg_temp_2 test=> CREATE FUNCTION pg_temp_2.foo() RETURNS BOOL AS 'BEGIN RETURN TRUE; END;' LANGUAGE 'plpgsql'; ERROR: permission denied for schema pg_temp_2 My only gripe about what I've done is with the wording in the following use case: ## BEGIN -- The schema pg_temp_2 doesn't exist right now. test=> SHOW search_path ; search_path - pg_temp_2 (1 row) test=> CREATE sequence foo_seq; ERROR: no schema has been selected to create in test=> SELECT public.setuid_wrapper(); -- This function creates a temp namespace && table setuid_wrapper t (1 row) test=> CREATE sequence foo_seq; ERROR: no schema has been s
Re: [PATCHES] Patch for GUC custom variables
Here's the patch as an attachment instead. The previous one got corrupted. Pasting a patch into a mail seems to be a really bad idea... - thomas Index: doc/src/sgml/runtime.sgml === RCS file: /projects/cvsroot/pgsql-server/doc/src/sgml/runtime.sgml,v retrieving revision 1.262 diff -u -r1.262 runtime.sgml --- doc/src/sgml/runtime.sgml 22 Apr 2004 04:18:37 - 1.262 +++ doc/src/sgml/runtime.sgml 29 Apr 2004 22:07:21 - @@ -2924,6 +2924,60 @@ + +Customized Options + + + The following was designed to allow options not normally known to + PostgreSQL to be declared in the postgresql.conf + file and/or manipulated using the SET command in a controlled + manner so that add-on modules to the postgres proper (such as language + mappings for triggers and functions) can be configured in a unified way. + + + + + + custom_variable_classes (string) + custom_variable_classes + + +This variable specifies one or several classes to be used for custom +variables. A custom variable is a variable not normally known to +the PostgreSQL proper but used by some add +on module. + + + +Arbitrary variables can be defined for each class specified here. Those +variables will be treated as placeholders and have no meaning until the +module that defines them is loaded. When a module for a specific class is +loaded, it will add the proper variable definitions for the class +associated with it, convert any placeholder values according to those +definitions, and issue warnings for any placeholders that then remain. + + + +Here is an example of what custom variables might look like: + + +custom_variable_classes = 'plr,pljava' +plr.foo = '/usr/lib/R' +pljava.baz = 1 +plruby.var = true <== this one would generate an error + + + + +This option can only be set at server start or in the +postgresql.conf configuration file. 
+ + + + + + + Developer Options Index: src/backend/utils/misc/guc-file.l === RCS file: /projects/cvsroot/pgsql-server/src/backend/utils/misc/guc-file.l,v retrieving revision 1.21 diff -u -r1.21 guc-file.l --- src/backend/utils/misc/guc-file.l 24 Feb 2004 22:06:32 - 1.21 +++ src/backend/utils/misc/guc-file.l 29 Apr 2004 22:07:22 - @@ -34,6 +34,7 @@ GUC_REAL = 4, GUC_EQUALS = 5, GUC_UNQUOTED_STRING = 6, + GUC_QUALIFIED_ID = 7, GUC_EOL = 99, GUC_ERROR = 100 }; @@ -65,6 +66,7 @@ LETTER_OR_DIGIT [A-Za-z_0-9\200-\377] ID {LETTER}{LETTER_OR_DIGIT}* +QUALIFIED_ID{ID}"."{ID} UNQUOTED_STRING {LETTER}({LETTER_OR_DIGIT}|[-._:/])* STRING \'([^'\n]|\\.)*\' @@ -76,6 +78,7 @@ #.*$/* eat comment */ {ID}return GUC_ID; +{QUALIFIED_ID} return GUC_QUALIFIED_ID; {STRING}return GUC_STRING; {UNQUOTED_STRING} return GUC_UNQUOTED_STRING; {INTEGER} return GUC_INTEGER; @@ -180,7 +183,7 @@ case 0: /* no previous input */ if (token == GUC_EOL) /* empty line */ continue; -if (token != GUC_ID) +if (token != GUC_ID && token != GUC_QUALIFIED_ID) goto parse_error; opt_name = strdup(yytext); if (opt_name == NULL) @@ -216,6 +219,24 @@ case 2: /* now we'd like an end of line */ if (token != GUC_EOL) goto parse_error; + + if (strcmp(opt_name, "custom_variable_classes") == 0) + { + /* This variable must be added first as it controls the validity +* of other variables +*/ + if (!set_config_option(opt_name, opt_value, context, + PGC_S_FILE, false, true)) + { + FreeFile(fp); + free(filename); + goto cleanup_exit; + } + + /* Don't include in list */ + parse_state = 0; + break; + } /* append to list */ item = malloc(sizeof *item); Index: src/backend/utils/misc/guc.c ==
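The guc-file.l hunk above adds a GUC_QUALIFIED_ID token for names of the form class.variable. The acceptance rule can be modeled with a toy Python function (the helper name and return values are invented for illustration; the real check lives in the C GUC machinery):

```python
def classify_guc_name(name, custom_classes):
    """Mimic the patch's two token kinds: a plain GUC_ID has no dot,
    while a GUC_QUALIFIED_ID is 'class.variable'.  A qualified name is
    only a valid placeholder when its class appears in the
    custom_variable_classes set."""
    if "." not in name:
        return "id"                   # an ordinary built-in GUC name
    cls, _, var = name.partition(".")
    if cls in custom_classes and var:
        return "qualified-id"         # accepted as a placeholder
    return "error"                    # unknown class: rejected
```

With custom_variable_classes = 'plr,pljava', this mirrors the documentation example: plr.foo is accepted as a placeholder while plruby.var is rejected.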
Re: [PATCHES] subtransactions -- storage manager
On Sun, 2004-04-25 at 19:06, Alvaro Herrera wrote: > Hackers, > > This patch adds subtransaction support into the storage manager. Files > created or dropped inside a subtransaction are correctly dealt with at > subtransaction commit or abort. > - pg_clog/pg_subtrans. Need a solution. > > > PS: somehow I managed to get tired of the phrase "nested transactions" > and I'm using the term "subtransactions" instead. In my head they are > the same thing ... Impressive. As you're aware, our current work overlaps. pg_clog doesn't seem like the place to record subtransactions, though maybe it is... could we not give subtransactions a txnid just as with flat transactions? That way we can record everything in pg_clog AND recovery will work without further modification - as long as the failure of a top level transaction causes failure of every subtransaction EVEN if the subtrans originally committed. If you add pg_subtrans, you will need to make recovery work all over again...really, you don't want to be doing that, do you? I also have other questions. Forgive my lack of attention: I want SAVEPOINTs, not subtransactions... how do we do those? My last focus on this was to do with SQL handling of transactional rollback characteristics on error. PostgreSQL performs rollback on complete txn when error occurs, rather than allowing statement level abort and then retry...this was characterised as requiring "nested transactions"...are you aware of this...is it on your roadmap? Best Regards, Simon Riggs ---(end of broadcast)--- TIP 7: don't forget to increase your free space map settings
[PATCHES] PITR Phase 1 - Full set of patches
Full set of patches, including both earlier reported bugs fixed. This note was originally posted at Tue, 27 Apr 2004 23:30:39 +0100, but for some reason hasn't yet appeared on the [HACKERS] list, so I am now reposting this. Trying PATCHES list now... Full usage instructions (for TESTING only!) Patches: guc.c.patch goes to src/backend/utils/misc/guc.c xlog.c.patch goes to src/backend/access/transam/xlog.c xlog.h.patch goes to src/include/access/xlog.h Makefile.patch goes to src/bin/Makefile pgarch.tar adds directory and files for src/bin/pg_arch Build: pg_arch builds as part of the full source tree... General Execution 1. create pg_rlog directory beneath PGDATA 2. add (optional) line to postgresql.conf wal_debug=1 3. add (required) line to postgresql.conf wal_archive=true 4. execute pg_arch /your-archive-dir $PGDATA 5. startup new postmaster 6. execute workload of choice to force the creation of xlogs 7. watch the execution... Recovery Test: 1. Work out ways of testing the logical validity of data 2. While postmaster is UP: take a full physical backup of PGDATA 3. wait for some time while transaction archives get posted 4. perform your choice of destructive action on postgresql 5. mv backup copy to a restore location, possibly where it just was 6. mv ALL transaction logs taken SINCE full backup 7. startup postmaster and watch 8. execute tests from 1, to ensure recovery successful ...my patch building experience is less than some might expect so there are various possible annoyances here. I am on hand to help and to learn by my mistakes. 
Report bugs to me at [EMAIL PROTECTED] and/or to the [HACKERS] list [EMAIL PROTECTED] Thanks, Best Regards Simon Riggs 2nd Quadrant http://www.2ndquadrant.com pgarch.tar Description: Unix tar archive *** guc.c.orig 2004-04-27 23:08:24.0 +0100 --- ./guc.c 2004-03-23 22:52:31.0 + *** *** 458,463 --- 459,474 &enableFsync, true, NULL, NULL }, + + { + {"wal_archive", PGC_SUSET, WAL_SETTINGS, + gettext_noop("Enables archiving of WAL logs."), + gettext_noop("When enabled, PostgreSQL notifies an external archiver using the XLogArchive API") + }, + &XLogArchivePolicy, + true, NULL, NULL + }, + { {"zero_damaged_pages", PGC_SUSET, DEVELOPER_OPTIONS, gettext_noop("Continues processing past damaged page headers."), *** Makefile1.43 2004-04-24 09:56:30.0 +0100 --- Makefile 2004-04-24 09:59:02.0 +0100 *** *** 13,19 top_builddir = ../.. include $(top_builddir)/src/Makefile.global ! DIRS := initdb initlocation ipcclean pg_ctl pg_dump \ psql scripts pg_config pg_controldata pg_resetxlog all install installdirs uninstall depend distprep: --- 13,19 top_builddir = ../.. include $(top_builddir)/src/Makefile.global ! 
DIRS := initdb initlocation ipcclean pg_ctl pg_dump pg_arch\ psql scripts pg_config pg_controldata pg_resetxlog all install installdirs uninstall depend distprep: *** xlog.c.orig 2004-04-27 22:53:35.0 +0100 --- ./xlog.c 2004-04-27 22:01:15.0 +0100 *** *** 82,94 /* User-settable parameters */ int CheckPointSegments = 3; int XLOGbuffers = 8; int XLOG_DEBUG = 0; char *XLOG_sync_method = NULL; const char XLOG_sync_method_default[] = DEFAULT_SYNC_METHOD_STR; - char XLOG_archive_dir[MAXPGPATH]; /* null string means - * delete 'em */ /* * XLOGfileslop is used in the code as the allowed "fuzz" in the number of --- 82,93 /* User-settable parameters */ + bool XLogArchivePolicy = false; int CheckPointSegments = 3; int XLOGbuffers = 8; int XLOG_DEBUG = 0; char *XLOG_sync_method = NULL; const char XLOG_sync_method_default[] = DEFAULT_SYNC_METHOD_STR; /* * XLOGfileslop is used in the code as the allowed "fuzz" in the number of *** *** 396,401 --- 395,401 /* File path names */ static char XLogDir[MAXPGPATH]; + static char RLogDir[MAXPGPATH]; static char ControlFilePath[MAXPGPATH]; /* *** *** 437,442 --- 437,450 static bool InRedo = false; + static bool XLogArchiveNotify(uint32 log, uint32 seg); + #define XLA_NOTIFY_OK true + #define XLA_NOTIFY_ERROR false + + static bool XLogArchiveBusy(char xlog[MAXPGPATH]); + #define XLA_BUSY true + #define XLA_ARCHIVE_OK false + #define XLA_BUSY_ERROR 2 static bool AdvanceXLInsertBuffer(void); static void XLogWrite(XLogwrtRqst WriteRqst); *** *** 770,776 MyProc->logRec = RecPtr; } ! if (XLOG_DEBUG) { char buf[8192]; --- 778,784 MyProc->logRec = RecPtr; } ! if (XLOG_DEBUG==16) { char buf[8192]; *** *** 876,881 --- 884,1021 } /* + * XLogArchive API calls + * + * Two calls implement the PostgreSQL side of the XLogArchive API + * XLogArchiveNotify + * XLogArchiveBusy + */ + + /* + * XLogArchiveNotify +
Re: [PATCHES] Basic subtransaction facility
On Thu, Apr 29, 2004 at 06:42:31PM +0200, Manfred Koizar wrote: > On Wed, 28 Apr 2004 12:02:44 -0400, Alvaro Herrera > <[EMAIL PROTECTED]> wrote: > >In fact, I think we should mark ERROR as aborting the whole transaction > >tree, and create a new level which would abort the innermost > >subtransaction. We would then change whatever is appropriate to the new > >elevel. Doing otherwise would leave us open to unexpected conditions > >causing only subtrans abort, which could lead to unreliable behavior. > > Why? Subtransaction commit propagates an error state to the parent > transaction. And if a subtransaction is rolled back the parent can > continue cleanly no matter what was the reason for the subtrans abort. Not necessarily; consider can't-happen conditions, like everything that is marked elog(ERROR) rather than ereport(ERROR). Corrupt hashes, should-exist catalog entries that are not there, etc. They should not be frequent, but we should be prepared for them. -- Alvaro Herrera () "La virtud es el justo medio entre dos defectos" (Aristóteles) ---(end of broadcast)--- TIP 8: explain analyze is your friend
Re: [PATCHES] Basic subtransaction facility
On Thu, Apr 29, 2004 at 07:29:07PM +0200, Peter Eisentraut wrote: > Bruce Momjian wrote: > > I think his point was that there are some errors that should abort > > the outer transaction too. I think Alvaro mentioned out of memory, > > but that is a FATAL error. Alvaro, what error were you thinking of > > that should abort the outer transaction? > > Theoretically, if you abort the inner transaction, that could free up > memory for use by the outer transaction. Yes, this is planned to happen. -- Alvaro Herrera () "La tristeza es un muro entre dos jardines" (Khalil Gibran) ---(end of broadcast)--- TIP 6: Have you searched our list archives? http://archives.postgresql.org
Re: [PATCHES] Basic subtransaction facility
On Thu, Apr 29, 2004 at 02:42:23PM -0400, Tom Lane wrote: > In general I tend to agree with Manfred's point: if you have reason to > suspect global corruption of a backend's state then you should do FATAL > (or possibly PANIC). If you do not suspect this then you ought to just > ERROR. I do not see the use-case for abort-all-levels-of-xact-but- > don't-exit. Ok, I'm not wedded to the idea of a new elevel. So you think elog(ERROR) should rather be elog(FATAL) ? -- Alvaro Herrera () Y una voz del caos me habló y me dijo "Sonríe y sé feliz, podría ser peor". Y sonreí. Y fui feliz. Y fue peor. ---(end of broadcast)--- TIP 4: Don't 'kill -9' the postmaster
Re: [PATCHES] Basic subtransaction facility
On Thu, Apr 29, 2004 at 12:26:01AM -0400, Bruce Momjian wrote: > I don't understand your elog(ERROR) vs. ereport(ERROR) distinction. Was > that a typo? Nope. When Tom upgraded the error handling, he changed almost everything to ereport(), but in the places where there's a violation of expected conditions, he retained elog(). We don't provide a special error code, nor is there space for errhints etc. For those unexpected conditions I thought we could just abort the transaction tree; but maybe we have to close the backend as Manfred and Tom say. I don't think there's space for PANIC though (unless we suspect shared state corruption ... is that checked for anywhere? I haven't looked.) -- Alvaro Herrera () "No single strategy is always right (Unless the boss says so)" (Larry Wall) ---(end of broadcast)--- TIP 3: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [EMAIL PROTECTED] so that your message can get through to the mailing list cleanly
Re: [PATCHES] SECURITY DEFINER not being propagated...
Sean Chittenden <[EMAIL PROTECTED]> writes: > I think the attached patch addresses both of your concerns. Perhaps something like this will work, but the patch as given can't possibly be right (or have been tested with any care): > + aclresult = pg_namespace_aclcheck(MyDatabaseId, GetUserId(), > + ACL_CREATE_TEMP); Surely that should be pg_database_aclcheck() ... and the error reporting code just below won't be very appropriate for this case, either. Also a comment or five would be appropriate. A larger problem is that the reason that control makes it through that path at the moment is this check in pg_namespace_aclcheck: /* * If we have been assigned this namespace as a temp namespace, assume * we have all grantable privileges on it. */ if (isTempNamespace(nsp_oid)) return ACLCHECK_OK; (Since the temp namespace is created as owned by the superuser, ordinary users would always fail to create temp tables without this escape hatch.) I am not at all convinced that this check could be removed, but I also wonder whether its presence doesn't create some issues that are security holes if we adopt your definition of how temp table creation ought to behave. >>> At the moment, this behavior cripples the usefulness >>> of having a TEMP table be used as a trusted cache for data. >> >> What exactly do you think makes a temp table suitable as a trusted >> cache? Or more suitable than non-temp tables? > Revoke all privs on temp tables except from the DBA, then setup > functions to use the temp table as a way of maintaining state > information across transactions (within the same session). This is pretty much a non-argument, as there is no part of it that says that you have to revoke the right to create temp tables from Joe User. What is necessary and sufficient is that the particular temp table you want to keep your info in has to be owned by, and only accessible to, the more-privileged account. You need not muck with the temp namespace behavior before you can do that. 
regards, tom lane ---(end of broadcast)--- TIP 9: the planner will ignore your desire to choose an index scan if your joining column's datatypes do not match
Re: [PATCHES] Basic subtransaction facility
Bruce Momjian <[EMAIL PROTECTED]> writes:
> Manfred Koizar wrote:
>> Why? Subtransaction commit propagates an error state to the parent
>> transaction. And if a subtransaction is rolled back the parent can
>> continue cleanly no matter what was the reason for the subtrans abort.

> I think his point was that there are some errors that should abort the
> outer transaction too.  I think Alvaro mentioned out of memory, but that
> is a FATAL error.

Nonsense. In the first place, out-of-memory hasn't been FATAL for years. In the second, there is no reason to think that we can't continue the outer transaction(s), as aborting the innermost one is likely to free quite a lot of memory. (And if it doesn't, well, the outer one will get its own out-of-memory ERROR soon enough.)

In general I tend to agree with Manfred's point: if you have reason to suspect global corruption of a backend's state then you should do FATAL (or possibly PANIC). If you do not suspect this then you ought to just ERROR. I do not see the use-case for abort-all-levels-of-xact-but-don't-exit.

regards, tom lane

---(end of broadcast)---
TIP 7: don't forget to increase your free space map settings
Re: [PATCHES] Basic subtransaction facility
Bruce Momjian wrote:
> I think his point was that there are some errors that should abort
> the outer transaction too.  I think Alvaro mentioned out of memory,
> but that is a FATAL error.  Alvaro, what error were you thinking of
> that should abort the outer transaction?

Theoretically, if you abort the inner transaction, that could free up memory for use by the outer transaction.

---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [EMAIL PROTECTED] so that your message can get through to the mailing list cleanly
Re: [PATCHES] Basic subtransaction facility
Manfred Koizar wrote:
> On Wed, 28 Apr 2004 12:02:44 -0400, Alvaro Herrera
> <[EMAIL PROTECTED]> wrote:
>> In fact, I think we should mark ERROR as aborting the whole transaction
>> tree, and create a new level which would abort the innermost
>> subtransaction.  We would then change whatever is appropriate to the new
>> elevel.  Doing otherwise would leave us open to unexpected conditions
>> causing only subtrans abort, which could lead to unreliable behavior.

> Why? Subtransaction commit propagates an error state to the parent
> transaction. And if a subtransaction is rolled back the parent can
> continue cleanly no matter what was the reason for the subtrans abort.

I think his point was that there are some errors that should abort the outer transaction too. I think Alvaro mentioned out of memory, but that is a FATAL error. Alvaro, what error were you thinking of that should abort the outer transaction?

--
  Bruce Momjian                   |  http://candle.pha.pa.us
  [EMAIL PROTECTED]               |  (610) 359-1001
  +  If your life is a hard drive,|  13 Roberts Road
  +  Christ can be your backup.   |  Newtown Square, Pennsylvania 19073

---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster
Re: [PATCHES] Basic subtransaction facility
On Wed, 28 Apr 2004 12:02:44 -0400, Alvaro Herrera <[EMAIL PROTECTED]> wrote:
> In fact, I think we should mark ERROR as aborting the whole transaction
> tree, and create a new level which would abort the innermost
> subtransaction.  We would then change whatever is appropriate to the new
> elevel.  Doing otherwise would leave us open to unexpected conditions
> causing only subtrans abort, which could lead to unreliable behavior.

Why? Subtransaction commit propagates an error state to the parent transaction. And if a subtransaction is rolled back the parent can continue cleanly no matter what was the reason for the subtrans abort.

Servus
 Manfred

---(end of broadcast)---
TIP 8: explain analyze is your friend
Re: [PATCHES] SECURITY DEFINER not being propagated...
>> Without any rationale as to why CREATE TEMP TABLEs checks the session
>> user in the archives, could we open this up for discussion again?

> Well, let me put it this way: if you want to change the behavior you're
> going to have to work much harder than just reverting the prior patch.
> IIRC the fundamental reason the code works that way is that
> InitTempTableNamespace is done only once per session.  If it checks
> against current_user rather than session_user then (a) the results will
> be inconsistent, and (b) you create a different sort of security hole,
> which is that if a setuid function is the first to try to create a temp
> table in a particular session, then not-so-privileged functions will
> still be able to create temp tables later in the session.

Hrm, I didn't realize that: points taken. Since temp schemas are always owned by the superuser, why aren't the ACL checks done when the temp relation is created, as opposed to when the schema is created? I see what you're saying about things currently needing to use GetSessionUserId() instead of GetUserId(), but if a check for istemp is pushed down into DefineRelation(), then (from what I can tell) GetUserId() can be used in InitTempTableNamespace(). Object owners can only delete their own objects, so the temp schema can't be deleted as-is because its owner is the superuser.

I think the attached patch addresses both of your concerns. Things are consistent in that SECURITY DEFINER/TEMP TABLEs will now work as expected. The security hole isn't an issue because security checks are applied both in InitTempTableNamespace() and DefineRelation().

>> At the moment, this behavior cripples the usefulness of having a TEMP
>> table be used as a trusted cache for data.

> What exactly do you think makes a temp table suitable as a trusted
> cache?  Or more suitable than non-temp tables?

Revoke all privs on temp tables except from the DBA, then set up functions to use the temp table as a way of maintaining state information across transactions (within the same session). It's a hack to get around the lack of server-side variables. In many ways, actually, it works out better because I can wrap functions or PostgreSQL's permissions around the temp relations and get exactly the access that I need... far more fine-grained than anything I could do with a GUC or some other server-side MIB/variable implementation.

> I don't really believe in the notion of restricting temp table creation
> to setuid functions.  AFAICS the only reason for forbidding temp table
> creation is to prevent a session from using any on-disk resources, and
> that hardly works if it can still do so via calling setuid functions.

It can't populate or read data out of the temp relation though, which is ideal for my situation.

-sc

Index: catalog/namespace.c
===================================================================
RCS file: /projects/cvsroot/pgsql-server/src/backend/catalog/namespace.c,v
retrieving revision 1.63
diff -u -r1.63 namespace.c
--- catalog/namespace.c	13 Feb 2004 01:08:20 -	1.63
+++ catalog/namespace.c	29 Apr 2004 07:33:49 -
@@ -1639,12 +1639,8 @@
 	 * First, do permission check to see if we are authorized to make temp
 	 * tables.  We use a nonstandard error message here since
 	 * "databasename: permission denied" might be a tad cryptic.
-	 *
-	 * Note we apply the check to the session user, not the currently active
-	 * userid, since we are not going to change our minds about temp table
-	 * availability during the session.
 	 */
-	if (pg_database_aclcheck(MyDatabaseId, GetSessionUserId(),
+	if (pg_database_aclcheck(MyDatabaseId, GetUserId(),
 							 ACL_CREATE_TEMP) != ACLCHECK_OK)
 		ereport(ERROR,
 				(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
Index: commands/tablecmds.c
===================================================================
RCS file: /projects/cvsroot/pgsql-server/src/backend/commands/tablecmds.c,v
retrieving revision 1.102
diff -u -r1.102 tablecmds.c
--- commands/tablecmds.c	1 Apr 2004 21:28:44 -	1.102
+++ commands/tablecmds.c	29 Apr 2004 07:33:49 -
@@ -165,8 +165,12 @@
 	{
 		AclResult	aclresult;

-		aclresult = pg_namespace_aclcheck(namespaceId, GetUserId(),
-										  ACL_CREATE);
+		if (stmt->relation->istemp)
+			aclresult = pg_namespace_aclcheck(MyDatabaseId, GetUserId(),
+											  ACL_CREATE_TEMP);
+		else
+			aclresult = pg_namespace_aclcheck(namespaceId, GetUserId(),
+											  ACL_CREATE);
 		if (aclresult != ACLCHECK_OK)
 			aclcheck_error(aclresult, ACL_KIND_NAMESPACE,