[HACKERS] Small comment fix in sinvaladt.c

2013-07-30 Thread Hitoshi Harada
diff --git a/src/backend/storage/ipc/sinvaladt.c
b/src/backend/storage/ipc/sinvaladt.c
index 09f41c1..44d02c5 100644
--- a/src/backend/storage/ipc/sinvaladt.c
+++ b/src/backend/storage/ipc/sinvaladt.c
@@ -214,7 +214,7 @@ SInvalShmemSize(void)
 }

 /*
- * SharedInvalBufferInit
+ * CreateSharedInvalidationState
  * Create and initialize the SI message buffer
  */
 void



-- 
Hitoshi Harada


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] pass-through queries to foreign servers

2013-07-30 Thread Tom Lane
David Fetter  writes:
> On Tue, Jul 30, 2013 at 04:40:38PM -0700, David Gudeman wrote:
>> When you write an application involving foreign tables, you frequently
>> end up with queries that are just too inefficient because they bring
>> too much data over from the foreign server. For a trivial example,
>> consider "SELECT count(*) FROM t" where t is a foreign table. This
>> will pull the entire table over the network just to count up the rows.

> Yes, and this case is a known limitation of our planner
> infrastructure.   Aggregates are "special" when it comes to
> generating paths for the planner to evaluate, so there's no current
> way a FDW could supply such info to the planner, and hence no API in
> our FDW code for having FDWs supply that info.  That's probably a
> "should fix" but I don't know whether a project that size could be
> done by 9.4.

Yeah.  There's a lot left to be done in the FDW infrastructure.
But not this:

> All that said, my DBI-Link, back in the bad old days, provided two
> important functions: remote_select(), which returned SETOF RECORD and
> remote_execute(), which returned nothing.  It also provided ways to
> control connections to the remote host, introspect remote schemas,
> etc., etc.  We need capabilities like that in the FDW API; I believe
> we could have them by 9.4.

I would argue we *don't* want that.  If you want pass-through queries
or explicit connection control, your needs are already met by dblink or
dbi-link.  The whole point of FDW is that it's at a higher level of
abstraction than that, which offers greater ease of use and will
eventually offer better optimization than what you can get from dblink
et al.  If we start trying to shoehorn things like passthrough queries
into FDW, we'll be crippling the technology.  As an example, somebody
on planet postgresql was just recently surprised to find that postgres_fdw
honors transaction rollback.  Well, it can do that because users can't
disconnect the connection underneath it, nor issue passthrough
commit/rollback commands.  You don't get to have it both ways.

regards, tom lane




Re: [HACKERS] Backup throttling

2013-07-30 Thread Gibheer
On Wed, 24 Jul 2013 09:20:52 +0200
Antonin Houska  wrote:

> Hello,
> the purpose of this patch is to limit impact of pg_backup on running 
> server. Feedback is appreciated.
> 
> // Antonin Houska (Tony)

Hi,

That is a really nice feature. I took a first look at your patch and
noticed some empty lines you added (e.g. line 60 of your patch). Can you
remove them?

Why did you move localGetCurrentTimestamp() into streamutil.c? Is
sys/time.h still needed in receivelog.c after the move?

I will try your patch later today to see if it works.

regards,

Stefan Radomski




Re: [HACKERS] Compute VARSIZE_ANY(PTR) during debugging

2013-07-30 Thread Peter Geoghegan
On Tue, Jul 30, 2013 at 10:33 AM, Greg Stark  wrote:
> I think there's some magic in gdb for this but I'm not sure how to
> make it happen. If you figure it out I would think it would be
> generally useful and we should find a way to put it in the source tree
> so it works for everyone.

You can write custom pretty printers for varlena types using GDB's
pretty printers (Python bindings expose this stuff). You can even
differentiate between text and bytea, even though they're both just
typedefs for varlena. I've done this myself in the past, but
unfortunately I don't control the source code. I can tell you that the
bindings are excellent, though.

I was even able to do things like printing output more or less
equivalent to what MemoryContextStats() dumps, but directly from GDB
(i.e. I could walk the tree of memory contexts), even though there is a
bunch of private macros involved - I essentially rewrote
AllocSetStats() in weirdly C-like Python.
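As a hedged illustration (this is not the code I wrote back then, and the
short-varlena byte layout shown is the big-endian variant quoted elsewhere
in this thread), a printer is registered by appending a lookup function to
gdb.pretty_printers; the decoding helper itself is plain Python:

```python
# Sketch of a varlena pretty-printer using gdb's Python bindings.
# Registration only happens inside gdb, since the "gdb" module exists
# only there; the decoder works on raw bytes anywhere.

def short_varlena_text(raw: bytes) -> str:
    """Decode a short (1-byte, big-endian header) text varlena."""
    size = raw[0] & 0x7F            # VARSIZE_1B: low 7 bits, incl. header
    return raw[1:size].decode("latin-1")

try:
    import gdb

    class VarlenaPrinter:
        """to_string() is the only method gdb strictly requires."""
        def __init__(self, val):
            self.val = val

        def to_string(self):
            raw = bytes(gdb.selected_inferior().read_memory(
                int(self.val.address), 128))
            return short_varlena_text(raw)

    def varlena_lookup(val):
        # Crude type match; a real printer would also distinguish
        # typedefs like text and bytea.
        if str(val.type) == "varlena":
            return VarlenaPrinter(val)
        return None

    gdb.pretty_printers.append(varlena_lookup)
except ImportError:
    pass  # running outside gdb; only the decoder is usable
```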

-- 
Peter Geoghegan




Re: [HACKERS] Compute VARSIZE_ANY(PTR) during debugging

2013-07-30 Thread Andres Freund
Hi,

On 2013-06-26 13:27:15 +0900, Amit Langote wrote:
> Is it possible to compute VARSIZE_ANY(PTR) during debugging?
> 
> -
> #define VARSIZE_ANY(PTR) \
> (VARATT_IS_1B_E(PTR) ? VARSIZE_1B_E(PTR) : \
>  (VARATT_IS_1B(PTR) ? VARSIZE_1B(PTR) : \
>   VARSIZE_4B(PTR)))
> 
> #define VARATT_IS_1B_E(PTR) \
> ((((varattrib_1b *) (PTR))->va_header) == 0x80)
> ---
> 
> I tried using the above expression, but it gives the following:
> 
> (gdb) p ((((varattrib_1b *) (tp + off))->va_header) == 0x80)
> No symbol "varattrib_1b" in current context.

FWIW, for me, just replacing typedefs in such cases by the actual
struct's name often works. Unfortunately varattrib_1b is an anonymous
struct, but that's easy enough to change.
In HEAD it seems enough to replace the usages in VARTAG_SIZE by the
actual structs. Like in the attached patch.

If you compile postgres with -g3 or higher, it will include most macro
definitions in the binary. If you then additionally define the following
in your .gdbinit, many macros work OOTB:

macro define __builtin_offsetof(T, F) ((int) &(((T *) 0)->F))
macro define __extension__
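For what it's worth, the quoted macros can also be transcribed into a small
Python helper (usable from gdb's Python bindings, or on raw bytes pulled out
of a core file). This is only a sketch of the big-endian layout implied by
the 0x80 test above, with external TOAST pointers left out:

```python
def varsize_any(buf: bytes) -> int:
    """Rough Python transcription of VARSIZE_ANY() for the big-endian
    header layout quoted above (the 0x80 test); external TOAST
    pointers (VARATT_IS_1B_E) are not handled."""
    header = buf[0]
    if header == 0x80:            # VARATT_IS_1B_E: external TOAST pointer
        raise NotImplementedError("size lives in the TOAST pointer")
    if header & 0x80:             # VARATT_IS_1B: short one-byte header
        return header & 0x7F      # VARSIZE_1B: low 7 bits, incl. header
    # VARSIZE_4B: four-byte header, top two bits are flag bits
    return int.from_bytes(buf[:4], "big") & 0x3FFFFFFF

# A short varlena holding "abc": 1 header byte + 3 data bytes = 4
print(varsize_any(bytes([0x84]) + b"abc"))   # 4
```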

Greetings,

Andres Freund

-- 
 Andres Freund http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services
diff --git a/src/include/postgres.h b/src/include/postgres.h
index cb9d196..4bf98cf 100644
--- a/src/include/postgres.h
+++ b/src/include/postgres.h
@@ -98,8 +98,8 @@ typedef enum vartag_external
 } vartag_external;
 
 #define VARTAG_SIZE(tag) \
-	((tag) == VARTAG_INDIRECT ? sizeof(varatt_indirect) :		\
-	 (tag) == VARTAG_ONDISK ? sizeof(varatt_external) : \
+	((tag) == VARTAG_INDIRECT ? sizeof(struct varatt_indirect) :		\
+	 (tag) == VARTAG_ONDISK ? sizeof(struct varatt_external) : \
 	 TrapMacro(true, "unknown vartag"))
 
 /*



[HACKERS] Re: [COMMITTERS] pgsql: Add support for REFRESH MATERIALIZED VIEW CONCURRENTLY.

2013-07-30 Thread Andres Freund
Hi Kevin,

On 2013-07-23 09:27:34 -0700, Kevin Grittner wrote:
> Andres Freund  wrote:
> 
> > Hm. There seems to be more things that need some more improvement
> > from a quick look.
> >
> > First, I have my doubts about the general modus operandi of
> > CONCURRENTLY here. I think in many, many cases it would be faster
> > to simply swap in the newly built heap + indexes concurrently.
> > I think I have seen that mentioned in the review while mostly
> > skipping the discussion, but still. That would be easy enough as
> > of Robert's mvcc patch.
> 
> Yeah, in many cases you would not want to choose to specify this
> technique. The point was to have a technique which could run
> without blocking while a heavy load of queries (including perhaps a
> very long-running query) was executing.  If subsequent events
> provide an alternative way to do that, it's worth looking at it;
> but my hope was to get this out of the way to allow time to develop
> incremental maintenance of matviews for the next CF.

I thought there was a pretty obvious way to do this concurrently. But
after mulling it over for a few days, I am either missing a connection I
made previously or, more likely, I previously overlooked some fundamental
difficulties.

It would be fairly easy if indexes were attached to a relation's
relfilenode, not a relation's OID... But changing that certainly doesn't
fall in the "easy enough" category anymore.

> The bigger point is perhaps syntax.  If MVCC catalogs are really at
> a point where we can substitute a new heap and new indexes for a
> matview without taking an AccessExclusiveLock, then we should just
> make the current code do that.  I think there is still a place for
> a differential update for large matviews which only need small
> changes on a REFRESH, but perhaps the right term for that is not
> CONCURRENTLY.

Hm. Yes. Especially as any CONCURRENTLY implementation building and
swapping in a new heap that I can think of would have to run outside a
transaction (like CIC et al.).

> > * Shouldn't the GetUserIdAndSecContext(), NewGUCNestLevel() dance
> >   be done a good bit earlier? We're executing queries before
> >   that.
> 
> This can't be in effect while we are creating temp tables.  If
> we're going to support a differential REFRESH technique, it must be
> done more "surgically" than to wrap the whole REFRESH statement
> execution.

Sure, it cannot be active when creating the temp table. But it's
currently inactive while you SPI_execute queries. That can't be right.

> >   I'd even suggest using BuildIndexInfo() or such on the indexes,
> >   then you could use ->ii_Expressions et al instead of peeking
> >   into indkeys by hand.
> 
> Given that the function is in a source file described as containing
> "code to create and destroy POSTGRES index relations" and the
> comments for that function say that it "stores the information
> about the index that's needed by FormIndexDatum, which is used for
> both index_build() and later insertion of individual index tuples,"
> and we're not going to create or destroy an index or call
> FormIndexDatum or insert individual index tuples from this code, it
> would seem to be a significant expansion of the charter of that
> function.  What benefits do you see to using that level?

It'd prevent you from needing to peek into indkeys. Note that it's
already used in various places.

> > * All not explicitly looked up operators should be qualified
> >   using OPERATOR(). It's easy to define your own = operator for
> >   tid and set the search_path to public,pg_catalog. Now, the
> >   whole thing is restricted to table owners afaics, making this
> >   decidedly less dicey, but still. But if anybody actually uses a
> >   search_path like the above you can easily hurt them.
> > * Same seems to hold true for the y = x and y.* IS DISTINCT FROM
> >   x.* operations.
> > * (y.*) = (x.*) also needs to do proper operator lookups.
> 
> I wasn't aware that people could override the equality operators
> for tid and RECORD, so that seemed like unnecessary overhead.  I
> can pull the operators from the btree AM if people can do that.

Sure, you don't need any special privs for it:

CREATE OPERATOR public.= (
    LEFTARG = tid,
    RIGHTARG = tid,
    PROCEDURE = tideq);
That's obviously not doing anything nefarious, since it calls the
builtin function, but you get the idea.

I think for the cases where you're comparing tids it's fine to just
hardcode the operator to OPERATOR(pg_catalog.=).

> > * OpenMatViewIncrementalMaintenance() et al seem to be
> >   underdocumented.
> 
> If the adjacent comment for
> MatViewIncrementalMaintenanceIsEnabled() left you at all uncertain
> about what OpenMatViewIncrementalMaintenance() and
> CloseMatViewIncrementalMaintenance() are for, I can add comments.
> At the time I wrote it, it seemed to me to be redundant to have
> such comments, and would cause people to waste more time looking at
> them than would be saved by anything they could add to what th

Re: [HACKERS] Compute VARSIZE_ANY(PTR) during debugging

2013-07-30 Thread Amit Langote
On Wed, Jul 31, 2013 at 2:33 AM, Greg Stark  wrote:
> I think there's some magic in gdb for this but I'm not sure how to
> make it happen. If you figure it out I would think it would be
> generally useful and we should find a way to put it in the source tree
> so it works for everyone.
>
> You might find it useful to put breakpoints in heap_deformtuple() with
> conditions that catch the tuple you're looking for. That function will
> often (but not always) be the first function to see your corrupt
> tuple.


With the core dump I was working with on this problem about a month
back, I couldn't find heap_deformtuple() in the code path that
resulted in the segfault. As I said, it was slot_deform_tuple(). Is it
possible that it would be in the code path for a query like:

select bpcharcol1, bpcharcol2, int4col3, bpcharcol4,
octet_length(textcol5) from table where octet_length(textcol5) >
800;

-- 
Amit Langote




Re: [HACKERS] pass-through queries to foreign servers

2013-07-30 Thread David Fetter
On Tue, Jul 30, 2013 at 04:40:38PM -0700, David Gudeman wrote:
> When you write an application involving foreign tables, you frequently
> end up with queries that are just too inefficient because they bring
> too much data over from the foreign server. For a trivial example,
> consider "SELECT count(*) FROM t" where t is a foreign table. This
> will pull the entire table over the network just to count up the rows.
> If the writer of the foreign data wrapper was clever enough, this may
> only pull one column from the foreign server, but that can still be a
> lot of data.

Yes, and this case is a known limitation of our planner
infrastructure.   Aggregates are "special" when it comes to
generating paths for the planner to evaluate, so there's no current
way a FDW could supply such info to the planner, and hence no API in
our FDW code for having FDWs supply that info.  That's probably a
"should fix" but I don't know whether a project that size could be
done by 9.4.

All that said, my DBI-Link, back in the bad old days, provided two
important functions: remote_select(), which returned SETOF RECORD and
remote_execute(), which returned nothing.  It also provided ways to
control connections to the remote host, introspect remote schemas,
etc., etc.  We need capabilities like that in the FDW API; I believe
we could have them by 9.4.

I don't know how to solve your problem within the context of our
current FDW API, but I think it's common enough that we do need to
solve it as above.

Cheers,
David.
-- 
David Fetter  http://fetter.org/
Phone: +1 415 235 3778  AIM: dfetter666  Yahoo!: dfetter
Skype: davidfetter  XMPP: david.fet...@gmail.com
iCal: webcal://www.tripit.com/feed/ical/people/david74/tripit.ics

Remember to vote!
Consider donating to Postgres: http://www.postgresql.org/about/donate




[HACKERS] pass-through queries to foreign servers

2013-07-30 Thread David Gudeman
When you write an application involving foreign tables, you frequently
end up with queries that are just too inefficient because they bring
too much data over from the foreign server. For a trivial example,
consider "SELECT count(*) FROM t" where t is a foreign table. This
will pull the entire table over the network just to count up the rows.
If the writer of the foreign data wrapper was clever enough, this may
only pull one column from the foreign server, but that can still be a
lot of data.

To solve (or work around) this problem, it would be convenient to have
a pass-through query mechanism associated with foreign servers. A
pass-through query would look like a table function, but would use the
name of the foreign server as the function name. For example:

CREATE SERVER foo ...;
CREATE USER MAPPING ...;
CREATE FOREIGN TABLE t (...) SERVER foo ... OPTIONS (table 't');

SELECT size FROM foo('SELECT count(*) FROM t') AS t(size BIGINT);

The SELECT above will execute the quoted string as a query on the
foreign server represented by foo. (Notice that only the CREATE SERVER
and CREATE USER MAPPING are needed for the SELECT to work. I just
added the CREATE FOREIGN TABLE for context.)

I can think of two ways to implement this. I think it would be pretty
easy to just add a table function foo that does the right thing. This
would require the author of the foreign data wrapper to provide
another callback function to send the query and get back the results.
Such a callback function would largely duplicate the functionality of
the current callback functions and --because of the current
implementation of table functions-- it would materialize the entire
result set before returning it.
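To make the materialization point concrete, here is a hedged Python sketch
(using sqlite3 purely as a stand-in for the foreign server; all names are
invented): a table-function-style pass-through has to build the whole
result set before returning, while a scan-style one can yield rows as they
arrive:

```python
import sqlite3

def remote_query_materialized(conn, sql):
    """Table-function style: the entire result set is built in memory
    before anything is returned."""
    return conn.execute(sql).fetchall()

def remote_query_streaming(conn, sql):
    """Scan style: rows are yielded one at a time, like a foreign scan
    feeding the executor."""
    for row in conn.execute(sql):
        yield row

conn = sqlite3.connect(":memory:")   # stand-in for the foreign server
conn.execute("CREATE TABLE t (x)")
conn.executemany("INSERT INTO t VALUES (?)", [(i,) for i in range(1000)])

# When the aggregate runs remotely, only one row crosses the "wire":
print(remote_query_materialized(conn, "SELECT count(*) FROM t"))  # [(1000,)]
```

Either way, the remote server does the counting; the difference is only in
how the handful of result rows reaches the local executor.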

A more difficult solution (for me, at least) would be to construct a
sort of temporary foreign table from the pass-through query then let
it go through the usual foreign-table handling code. This also would
require some changes to foreign data wrappers. Current wrappers have
to construct a query to scan a foreign table, but with a pass-through
query the query is already constructed. But this probably requires
less work for the authors of foreign data wrappers and it doesn't
materialize the results of the foreign query unnecessarily.

Any suggestions or hints?

Regards,
David Gudeman
http://unobtainabol.blogspot.com




Re: [HACKERS] Re: ALTER SYSTEM SET command to change postgresql.conf parameters (RE: Proposal for Allow postgresql.conf values to be changed via SQL [review])

2013-07-30 Thread Stephen Frost
* Greg Stark (st...@mit.edu) wrote:
> I used to be a Debian package maintainer. I packaged kerberos and afs and a
> number of smaller packages. I know why Debian separates config files.

Ditto (though I suppose I'm still technically "active").

> If a file is automatically maintained then it's internal program state just
> like the list of users, databases, tables, etc.

Right.

> Programs keep their state in /var. It would be just as wrong to put
> auto.conf in /etc as it is to put the existing postgresql.conf in PGDATA.

Agreed.

Though, I have to say that I think we need "pg_hba.d" and "pg_ident.d"
more than we need a conf.d for postgresql.conf snippets.  And ditto for
auto.conf-style managed-through-PG options for pg_hba and pg_ident.
Apologies for not having been able to keep up with this thread fully, so
feel free to point out if this has already been discussed. :)

Thanks,

Stephen




Re: [HACKERS] [9.3 bug] disk space in pg_xlog increases during archive recovery

2013-07-30 Thread MauMau

From: "Fujii Masao" 
>> So, I think it is a bug that the disk space increases if not using
>> cascading replication.  Those who migrated from 9.1 and do not use
>> 9.2 features would be surprised like me.
>>
>> Do you think this should be fixed?
>
> I think so.

Thanks for your agreement.  I'll try to submit the patch as soon as
possible so that the fix will be included in the next minor release and
9.3.  If it seems unexpectedly difficult, I'd like to consult you.

>> How should it be fixed?
>
> What about removing the restored archived file as soon as it's replayed
> if cascading replication is not enabled (i.e., max_wal_senders = 0 or
> hot_standby = off)? This doesn't seem to break the existing behavior
> in 9.2.

I'll consider this condition, but I wonder if 9.1 users are setting
max_wal_senders>0 and hot_standby=on even when just performing archive
recovery (e.g. to use the same recovery.conf for all cases).  What do
you think?  Is there any other good condition to judge if cascading
replication is used?


Regards
MauMau





Re: [HACKERS] install libpq.dll in bin directory on Windows / Cygwin

2013-07-30 Thread MauMau

From: "Andrew Dunstan" 
> I don't have a problem adding them to the bin directory. I'd be very
> slightly wary of removing them from the lib directory, for legacy
> reasons.  Maybe these changes should be done for git tip and not
> backported (or maybe just to 9.3). Or we could just decide to clean
> this mess in one fell swoop.

I think just adding them to the bin is enough for convenience.  I don't
think backporting to 9.2 and before is necessary, if we consider this
change as an improvement.  I'd appreciate it if you could do that for
9.3 and later.

From: "Andres Freund" 
> On 2013-07-28 15:37:49 -0400, Andrew Dunstan wrote:
>>> BTW, why is libpq.dll in lib necessary?  For the above files?  If so,
>>> we can remove libpq.dll from lib.  Or, libpqwalreceiver.dll needs it?
>>
>> Not sure. Perhaps you could experiment and see if anything bad happens
>> if libpq is just installed in the bin directory and not in lib.
>
> Yes, it does need it.

Just out of curiosity, what needs libpq.dll in lib?


Regards
MauMau





[HACKERS] Re: ALTER SYSTEM SET command to change postgresql.conf parameters (RE: Proposal for Allow postgresql.conf values to be changed via SQL [review])

2013-07-30 Thread Greg Stark
On Jul 30, 2013 7:32 PM, "Josh Berkus"  wrote:
>
>
> I don't think you understood GSmith's argument.  For Debian/Ubuntu
> sysadmins, configuration files live in /etc/, *period*.  Even ones which
> were automatically generated.  The packagers and scripters of that OS
> have taken a significant amount of trouble to make that so, including
> writing their own wrapper scripts around utilities for Postgres, Apache,
> etc., in order to support putting all configuration files in /etc/.
>

I used to be a Debian package maintainer. I packaged kerberos and afs and a
number of smaller packages. I know why Debian separates config files.

If a file is automatically maintained then it's internal program state just
like the list of users, databases, tables, etc.

Programs keep their state in /var. It would be just as wrong to put
auto.conf in /etc as it is to put the existing postgresql.conf in PGDATA.


Re: [HACKERS] Re: ALTER SYSTEM SET command to change postgresql.conf parameters (RE: Proposal for Allow postgresql.conf values to be changed via SQL [review])

2013-07-30 Thread Josh Berkus
On 07/30/2013 11:12 AM, Greg Stark wrote:
> But if we're going to insist that conf.d be in PGDATA then I'm saying
> we don't need a second conf.d just to contain that one file. And if we
> let distributions and sysadmins drop files in there then we'll be back
> to square 0 with wanting to separate them.

Ah, misunderstood your point.  We certainly don't need *two* conf.d's.

> Greg Smith's argument was about recovery.conf which is a file that
> users are expected to edit. A file which user's are not expected to
> edit and is maintained by the software is no more a configuration file
> than pg_auth or pg_database which are actually being stored in the
> database itself.

I don't think you understood GSmith's argument.  For Debian/Ubuntu
sysadmins, configuration files live in /etc/, *period*.  Even ones which
were automatically generated.  The packagers and scripters of that OS
have taken a significant amount of trouble to make that so, including
writing their own wrapper scripts around utilities for Postgres, Apache,
etc., in order to support putting all configuration files in /etc/.

Now we're proposing to break that *again*, by putting yet another
configuration file in PGDATA, and making it impossible to relocate.
That's fairly hostile to the concerns of possibly our most popular OSes.

Saying "it's not a configuration file because it's automatic" is rather
disingenuous.  If you can set work_mem via postgresql.auto.conf, it's a
configuration file, regardless of the means you used to set it.

> Having an automatically maintained file and a manually maintained file
> is going to raise some tricky problems. In Oracle they have exactly
> the same issue. In that case the automatically maintained one is
> actually kept in a binary format. You can dump it out in text format
> but unless you switch which one you're using it doesn't read the text
> format file at all. I assume they did this because working out the
> conflicts between the two was just too complex for users.

I'm not sure we won't end up in the same place as Oracle, eventually.
We'll see.

Re: allowing users to disable ALTER SYSTEM SET, I think we'll have to
implement that before 9.4 comes out, for the simple reason that users
of Puppet and other large-scale configuration management utilities will
demand it.  If you're managing 120 PostgreSQL servers using centrally
version-controlled Chef recipes, the last thing in the world you want is
having your DBAs bypass that via ALTER SYSTEM SET.

HOWEVER, I think adding the ability to disable ALTER SYSTEM SET can (and
should) be a second, separate patch.

-- 
Josh Berkus
PostgreSQL Experts Inc.
http://pgexperts.com




[HACKERS] Re: ALTER SYSTEM SET command to change postgresql.conf parameters (RE: Proposal for Allow postgresql.conf values to be changed via SQL [review])

2013-07-30 Thread Greg Stark
On Tue, Jul 30, 2013 at 6:40 PM, Josh Berkus  wrote:
>
> On 07/30/2013 10:28 AM, Greg Stark wrote:
>> On Tue, Jul 30, 2013 at 6:10 PM, Alvaro Herrera
>> Well more to the point, if we add conf.d for sysadmins to drop in extra
>> snippets in a different place then we could drop conf.d in PGDATA
>> since there's no need for it any more and just have auto.conf in
>> PGDATA directly.
>
> That assumes that the only reason we have for a conf.d is to support
> auto.conf, which I don't agree with; I personally have a need for conf.d
> which has nothing to do with auto.conf.

No, wait. We had two reasons for conf.d and we thought we could kill
two birds with one stone. It seems it's turning out that we can't kill
both birds with the same stone because we want to separate the user
maintained config from the automatically maintained config even
further than just having them in separate files. Now we need them in
separate directories.

If you have a need for conf.d which has nothing to do with auto.conf
then you presumably will want to be editing the system conf.d which
might be in PGDATA or might be in /etc or might be someplace else. I
totally agree that conf.d is useful for sysadmins as well as for
distribution authors.

But if we're going to insist that conf.d be in PGDATA then I'm saying
we don't need a second conf.d just to contain that one file. And if we
let distributions and sysadmins drop files in there then we'll be back
to square 0 with wanting to separate them.

I guess what I'm saying is that auto.conf shouldn't be in conf.d after
all. It should just be in PGDATA directly and trying to kill two birds
with one stone led us astray. Putting it in conf.d creates a raft of
problems of what if conf.d is moved and what if it's shared between
databases and what if the user puts files that come after it, etc.
Putting it directly in PGDATA lets us manage those issues however we
want and ties the automatic file to a specific database regardless of
how we handle conf.d in the future.
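(For reference, the include_dir directive added in 9.3 already provides the
snippet mechanism under discussion; a minimal sketch, with the paths being
purely illustrative:

```
# postgresql.conf -- read every *.conf file in the named directory,
# in alphabetical order
include_dir = 'conf.d'

# conf.d/90-tuning.conf (a hypothetical snippet dropped in by a sysadmin)
work_mem = '64MB'
```

The open question is only where that directory lives and whether auto.conf
belongs in it.)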

> Also, see gsmith's point about forcing auto.conf to be in PGDATA as not
> satisfactory for Debian users (and others).

Greg Smith's argument was about recovery.conf which is a file that
users are expected to edit. A file which users are not expected to
edit and is maintained by the software is no more a configuration file
than pg_auth or pg_database which are actually being stored in the
database itself.

Having an automatically maintained file and a manually maintained file
is going to raise some tricky problems. In Oracle they have exactly
the same issue. In that case the automatically maintained one is
actually kept in a binary format. You can dump it out in text format
but unless you switch which one you're using it doesn't read the text
format file at all. I assume they did this because working out the
conflicts between the two was just too complex for users.

I think we're fine with allowing users to use both but we should try
to keep the two as separate as possible to avoid confusion. Keeping
the auto.conf inside conf.d sounds like it will actually confuse users
about which files they're supposed to edit and which belong to the
system.

-- 
greg




Re: ALTER SYSTEM SET command to change postgresql.conf parameters (RE: [HACKERS] Proposal for Allow postgresql.conf values to be changed via SQL [review])

2013-07-30 Thread Josh Berkus
On 07/30/2013 10:10 AM, Alvaro Herrera wrote:
>> Once we have consensus on these issues, then hopefully we can commit
>> your patch.  I don't *think* anyone is arguing with the code at this stage.
> 
> There's one other thing, raised by Cédric upthread, about the multiple
> files vs. single file.  We decided on a single file because Greg said
> many little files scattered in conf.d/ would be a mess; but if we're

That's not what I get out of the discussion thread.  I believe we
settled on One File because that's the patch Amit wrote, and a patch in
the hand is worth two in WIP.

> going to have two new items, one conf.d/ for snippets from external
> tools (living in the config directory) and one for ALTER SYSTEM (living
> in PGDATA), then the idea of going back to one setting per file is
> perhaps not bad.

On 07/30/2013 10:28 AM, Greg Stark wrote:
> On Tue, Jul 30, 2013 at 6:10 PM, Alvaro Herrera
> Well more to the point, if we add conf.d for sysadmins to drop in extra
> snippets in a different place then we could drop conf.d in PGDATA
> since there's no need for it any more and just have auto.conf in
> PGDATA directly.

That assumes that the only reason we have for a conf.d is to support
auto.conf, which I don't agree with; I personally have a need for conf.d
which has nothing to do with auto.conf.

Also, see gsmith's point about forcing auto.conf to be in PGDATA as not
satisfactory for Debian users (and others).

-- 
Josh Berkus
PostgreSQL Experts Inc.
http://pgexperts.com




Re: [HACKERS] Compute VARSIZE_ANY(PTR) during debugging

2013-07-30 Thread Greg Stark
I think there's some magic in gdb for this but I'm not sure how to
make it happen. If you figure it out I would think it would be
generally useful and we should find a way to put it in the source tree
so it works for everyone.

You might find it useful to put breakpoints in heap_deformtuple() with
conditions that catch the tuple you're looking for. That function will
often (but not always) be the first function to see your corrupt
tuple.




[HACKERS] Re: ALTER SYSTEM SET command to change postgresql.conf parameters (RE: Proposal for Allow postgresql.conf values to be changed via SQL [review])

2013-07-30 Thread Greg Stark
On Tue, Jul 30, 2013 at 6:10 PM, Alvaro Herrera
 wrote:
> one for ALTER SYSTEM (living
> in PGDATA), then the idea of going back to one setting per file is
> perhaps not bad.

Well more to the point, if we have conf.d for sysadmins to drop in extra
snippets in a different place then we could drop conf.d in PGDATA
since there's no need for it any more and just have auto.conf in
PGDATA directly.


-- 
greg




Re: ALTER SYSTEM SET command to change postgresql.conf parameters (RE: [HACKERS] Proposal for Allow postgresql.conf values to be changed via SQL [review])

2013-07-30 Thread Alvaro Herrera
Josh Berkus wrote:
> Amit,
> 
> > I had already sent the updated patch based on recent suggestions.
> 
> Yes, but there is no consensus yet on certain fundamental issues, such as:
> 
> 1. what directory should postgresql.auto.conf live in?
> 
> 2. should admins be able to turn it "off"?
> 
> Once we have consensus on these issues, then hopefully we can commit
> your patch.  I don't *think* anyone is arguing with the code at this stage.

There's one other thing, raised by Cédric upthread, about the multiple
files vs. single file.  We decided on a single file because Greg said
many little files scattered in conf.d/ would be a mess; but if we're
going to have two new items, one conf.d/ for snippets from external
tools (living in the config directory) and one for ALTER SYSTEM (living
in PGDATA), then the idea of going back to one setting per file is
perhaps not bad.

-- 
Álvaro Herrera                http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services




Re: ALTER SYSTEM SET command to change postgresql.conf parameters (RE: [HACKERS] Proposal for Allow postgresql.conf values to be changed via SQL [review])

2013-07-30 Thread Josh Berkus
Amit,

> I had already sent the updated patch based on recent suggestions.

Yes, but there is no consensus yet on certain fundamental issues, such as:

1. what directory should postgresql.auto.conf live in?

2. should admins be able to turn it "off"?

Once we have consensus on these issues, then hopefully we can commit
your patch.  I don't *think* anyone is arguing with the code at this stage.

Note that I count the name of the file as something on which we *have*
consensus.  While I personally don't like the name, I like arguing about
names even less.  We had a vote, let's stick with what we have.

> I am really not sure if something is pending from my side,
> so I am a bit confused by the status of the patch as Waiting On Author.

It's our policy that patches which are under spec discussion are marked
"waiting on author".

-- 
Josh Berkus
PostgreSQL Experts Inc.
http://pgexperts.com




Re: [HACKERS] ToDo: possible more rights to database owners

2013-07-30 Thread Pavel Stehule
2013/7/29 Pavel Stehule :
> 2013/7/29 Stephen Frost :
>> Szymon,
>>
>> * Szymon Guz (mabew...@gmail.com) wrote:
>>> On 29 July 2013 11:25, Pavel Stehule  wrote:
>>> > In 9.3 super user can cancel all queries or user can cancel own sessions.
>>> >
>>> > Is possible enhance this possibility to database owners? So owner can
>>> > cancel or can terminate sessions related to owned databases?
>>
>> Interesting idea but I'm not sure that database ownership is really how
>> we want to drive this.  I can certainly see cases where I'd like user
>> 'X' to be able to cancel queries but where that user isn't the database
>> owner.  Reducing the set of things that only a superuser can do is
>> certainly a worthwhile goal though.

Here is an implementation based on ownership - so a database owner can
control execution in his own database. It cannot, however, cancel or
terminate superusers.

Regards

Pavel



>
> there are two ideas:
>
> 1. some user (not a superuser) can terminate other users' queries (not
> only their own)
> 2. the limits are based on ownership.
>
> Probably there is agreement on @1. I think @2 is simple and natural
> - like "the owner is a small superuser" - and it doesn't need any new
> objects.
>
> A second possibility is a new kind of right - the ability to terminate
> some other users' sessions. This could be based on ownership or on
> specific rights. We can support both.
>
> For me, @1 is the important part. The design or implementation is not
> the issue - we want to decrease the usage of superuser and instead
> use more application users and only a few management users.
>
>>
>>> But this means that a db owner could cancel superuser's super important
>>> database query. Maybe let's make a default that the owner can cancel all
>>> queries except for superuser's ones. And additionaly a special right that
>>> superuser can grant it to the db owner, so the owner can cancel even
>>> superuser's queries?
>>
>> I'm not sure that I buy this argument either, particularly as
>> "superuser-ness status" can change due to a simple 'set role' and you'd
>> always have a race condition where the sending process might not realize
>> that the receiving process suddenly became a superuser process.  This
>> strikes me as an option we might attach to a role (ala create-user)
>> rather than drive it by database ownership and forget the whole thing
>> about trying to block it for superuser processes- either you can
>> terminate backends that aren't yours, or you can't.
>>
>
> yes, it is a valid option - when thinking about security - it could be
> a nice possibility to REVOKE the right to kill their own tasks from
> some selected users.
>
> Regards
>
> Pavel
>
>> Thanks,
>>
>> Stephen


owner-can-control-session.patch
Description: Binary data



Re: [HACKERS] Proposal - Support for National Characters functionality

2013-07-30 Thread Tom Lane
"Arulappan, Arul Shaji"  writes:
> Given below is a design draft for this functionality:

> Core new functionality (new code):
> 1)Create and register independent NCHAR/NVARCHAR/NTEXT data types.

> 2)Provide support for the new GUC nchar_collation to provide the
> database with information about the default collation that needs to be
> used for the new data types.

A GUC seems like completely the wrong tack to be taking.  In the first
place, that would mandate just one value (at a time anyway) of
collation, which is surely not much of an advance over what's already
possible.  In the second place, what happens if you change the value?
All your indexes on nchar columns are corrupt, that's what.  Actually
the data itself would be corrupt, if you intend that this setting
determines the encoding and not just the collation.  If you really are
speaking only of collation, it's not clear to me exactly what this
proposal offers that can't be achieved today (with greater security,
functionality and spec compliance) by using COLLATE clauses on plain
text columns.

Actually, you really haven't answered at all what it is you want to do
that COLLATE can't do.

> 4)Because all symbols from non-UTF8 encodings could be represented as
> UTF8 (but the reverse is not true) comparison between N* types and the
> regular string types inside database will be performed in UTF8 form.

I believe that in some Far Eastern character sets there are some
characters that map to the same Unicode glyph, but that some people
would prefer to keep separate.  So transcoding to UTF8 isn't necessarily
lossless.  This is one of the reasons why we've resisted adopting ICU or
standardizing on UTF8 as the One True Database Encoding.  Now this may
or may not matter for comparison to strings that were in some other
encoding to start with --- but as soon as you base your design on the
premise that UTF8 is a universal encoding, you are sliding down a
slippery slope to a design that will meet resistance.

> 6)Client input/output of NATIONAL strings - NATIONAL strings will
> respect the client_encoding setting, and their values will be
> transparently converted to the requested client_encoding before
> sending(receiving) to client (the same mechanics as used for usual
> string types).
> So no mixed encoding in client input/output will be supported/available.

If you have this restriction, then I'm really failing to see what
benefit there is over what can be done today with COLLATE.

regards, tom lane




Re: ALTER SYSTEM SET command to change postgresql.conf parameters (RE: [HACKERS] Proposal for Allow postgresql.conf values to be changed via SQL [review])

2013-07-30 Thread Greg Smith

On 7/28/13 10:49 PM, Tom Lane wrote:
> Josh Berkus writes:
>>> I think if we can design conf.d separately for config files of
>>> management tools, then it is better to have postgresql.auto.conf to be
>>> in $PGDATA rather than in $PGDATA/conf.d
>>
>> One of the biggest current complaints about recovery.conf from
>> Debian/Ubuntu users is the fact that it lives in $PGDATA.  Won't we just
>> get the same thing here?
>
> I don't think that's the same case, but ... why exactly don't they like
> recovery.conf, and can you show that the location of the file has
> anything to do with the underlying complaint?


Here is a quick survey of config files:

RHEL6:
Cluster configuration file at /etc/sysconfig/pgsql, writeable by root. 
postgresql.conf is in $HOME/8.4/data


Debian Squeeze:
All configuration files in /etc/postgresql/8.4/main and are owned by the 
postgres user.


That last one shows why there's a complaint, and why it's 75% due to 
location rather than split role.  Debian users expect all of the config 
files to be relocated to /etc/postgresql or /etc/postgresql-common. 
Those files are also writeable by the postgres user.  recovery.conf 
doesn't fit that model, and people don't like that.  The admin who has 
to manage servers with puppet would be much happier if recovery.conf was 
in the /etc/postgresql tree they are already managing.  I get this 
complaint every single time I introduce an experienced Debian admin to 
PostgreSQL replication setup.  They aren't sure whether it's the 
packagers or PostgreSQL that is to blame for recovery.conf being in 
$PGDATA instead of /etc, but it's obvious to them someone has screwed up.


Given that Debian is the major example of a relocated postgresql.conf in 
the field, and that work has been done such that a) all config files are 
in /etc and b) they are all writeable by the postgres user, I don't 
understand why people are suggesting the auto.conf file or conf.d 
directory will go anywhere else on that platform anymore.  I think the 
only claims that are suggesting otherwise are thinking about an older, 
now unsupported version of the Debian packaging.  Debian packaging 
absolutely will want to relocate any new files added here into their 
/etc directory tree as well, where they will be writeable.  They will 
not go into the $PGDATA directory.


--
Greg Smith   2ndQuadrant US   g...@2ndquadrant.com   Baltimore, MD
PostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.com




[HACKERS] Re: [COMMITTERS] pgsql: WITH CHECK OPTION support for auto-updatable VIEWs

2013-07-30 Thread hubert depesz lubaczewski
On Tue, Jul 30, 2013 at 11:45:47AM +0100, Dean Rasheed wrote:
> Quoting the manual:
> 
> LOCAL:
>   New rows are only checked against the conditions defined directly in
> the view itself. Any conditions defined on underlying base views are
> not checked (unless they also specify the CHECK OPTION).
> 
> CASCADED:
>   New rows are checked against the conditions of the view and all
> underlying base views. If the CHECK OPTION is specified, and neither
> LOCAL nor CASCADED is specified, then CASCADED is assumed.
> 
> In particular, note the part about "unless they also specify the CHECK 
> OPTION".

Ah. All clear now. Sorry for misreading.

Best regards,

depesz





Re: [HACKERS] [COMMITTERS] pgsql: WITH CHECK OPTION support for auto-updatable VIEWs

2013-07-30 Thread Dean Rasheed
On 30 July 2013 11:09, hubert depesz lubaczewski  wrote:
> On Tue, Jul 30, 2013 at 09:23:19AM +0100, Dean Rasheed wrote:
>> >> > > create table some_data (id int4 primary key, payload text);
>> >> > > create view first as select * from some_data where 0 = id % 2 with 
>> >> > > local check option;
>> >> > > create view second as select * from first where 0 = id with local 
>> >> > > check option;
>> > [...]
>> >> the check is "0 = id % 3" - i.e. id has to be multiply of 3. Sorry if my
>> >> way of writing conditionals is confusing.
>> Yes it definitely looks like a typo in the test --- the definition of
>> "first" has "id % 2", so it is checking for even numbers, not for
>> numbers divisible by 3.
>
> Sorry, my bad - must have screwed copy/paste.
> the second view is:
> select * from first where 0 = id % 3 with local check option;
>
>> As for the point about which of the checks should be failing, I
>> believe that the current behaviour is correct.
>
> In such case, can you show me what is the difference of "local check"
> and "cascaded check"?
> Because I assumed, after reading the commit log, that local checks just
> the view definition of the view I'm inserting to, and the cascaded
> check, checks all the views "upstream".
>
> Given the assumption that current code works correctly - both checks
> check also the upstream view.
>

Quoting the manual:

LOCAL:
  New rows are only checked against the conditions defined directly in
the view itself. Any conditions defined on underlying base views are
not checked (unless they also specify the CHECK OPTION).

CASCADED:
  New rows are checked against the conditions of the view and all
underlying base views. If the CHECK OPTION is specified, and neither
LOCAL nor CASCADED is specified, then CASCADED is assumed.

In particular, note the part about "unless they also specify the CHECK OPTION".

It is defined this way so that if any view has a CHECK OPTION on it,
any views built on top of that can't bypass that check in any way,
they can only add to it.

Taking a specific example, view1 on top of a base table with quals Q1,
and view2 on top of view1 with quals Q2, then inserts into view2 will
check Q1 and/or Q2 depending on the CHECK OPTIONs defined on view1 and
view2 according to the following rules:

                view1 check:   NONE    LOCAL    CASCADED
view2 check:
  NONE          -       Q1      Q1
  LOCAL         Q2      Q1+Q2   Q1+Q2
  CASCADED      Q1+Q2   Q1+Q2   Q1+Q2
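The matrix above can be made explicit as a toy model. Everything here is invented for illustration (the enum and function are not PostgreSQL internals); it only encodes the rule Dean describes.

```c
#include <stdbool.h>

/*
 * Toy model of the CHECK OPTION matrix: on an INSERT through view2
 * (which is built on view1), which quals get checked?
 */
typedef enum { NONE, LOCAL, CASCADED } CheckOption;

static void
checks_applied(CheckOption view1, CheckOption view2, bool *q1, bool *q2)
{
	/* view2's own qual Q2 is checked whenever view2 has a CHECK OPTION */
	*q2 = (view2 != NONE);

	/*
	 * view1's qual Q1 is checked if view1 itself carries a CHECK OPTION
	 * ("unless they also specify the CHECK OPTION"), or if view2 asks
	 * for CASCADED checking of underlying views.
	 */
	*q1 = (view1 != NONE) || (view2 == CASCADED);
}
```

Note how the LOCAL column of view2 still picks up Q1 whenever view1 has its own CHECK OPTION - that is the behaviour depesz was asking about.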

Regards,
Dean




[HACKERS] Re: [COMMITTERS] pgsql: WITH CHECK OPTION support for auto-updatable VIEWs

2013-07-30 Thread hubert depesz lubaczewski
On Tue, Jul 30, 2013 at 09:23:19AM +0100, Dean Rasheed wrote:
> >> > > create table some_data (id int4 primary key, payload text);
> >> > > create view first as select * from some_data where 0 = id % 2 with 
> >> > > local check option;
> >> > > create view second as select * from first where 0 = id with local 
> >> > > check option;
> > [...]
> >> the check is "0 = id % 3" - i.e. id has to be multiply of 3. Sorry if my
> >> way of writing conditionals is confusing.
> Yes it definitely looks like a typo in the test --- the definition of
> "first" has "id % 2", so it is checking for even numbers, not for
> numbers divisible by 3.

Sorry, my bad - must have screwed copy/paste.
the second view is:
select * from first where 0 = id % 3 with local check option;

> As for the point about which of the checks should be failing, I
> believe that the current behaviour is correct.

In such case, can you show me what is the difference of "local check"
and "cascaded check"?
Because I assumed, after reading the commit log, that local checks just
the view definition of the view I'm inserting to, and the cascaded
check, checks all the views "upstream".

Given the assumption that current code works correctly - both checks
check also the upstream view.

Best regards,

depesz





Re: [HACKERS] [COMMITTERS] pgsql: WITH CHECK OPTION support for auto-updatable VIEWs

2013-07-30 Thread Dean Rasheed
On 30 July 2013 01:24, Stephen Frost  wrote:
> depesz,
>
> * hubert depesz lubaczewski (dep...@depesz.com) wrote:
>> On Mon, Jul 29, 2013 at 07:43:53PM -0400, Stephen Frost wrote:
>> > * hubert depesz lubaczewski (dep...@depesz.com) wrote:
>> > > create table some_data (id int4 primary key, payload text);
>> > > create view first as select * from some_data where 0 = id % 2 with local 
>> > > check option;
>> > > create view second as select * from first where 0 = id with local check 
>> > > option;
> [...]
>> the check is "0 = id % 3" - i.e. id has to be multiply of 3. Sorry if my
>> way of writing conditionals is confusing.
>
> Neither client that I use to read email with saw a '% 3' on the view
> definition for 'second' in your original email (or as quoted above).
> Still, I do see what you're talking about and will take a look.
>

Yes it definitely looks like a typo in the test --- the definition of
"first" has "id % 2", so it is checking for even numbers, not for
numbers divisible by 3.

As for the point about which of the checks should be failing, I
believe that the current behaviour is correct. The relevant parts of
SQL:1999 are subclause 14.19 "Effect of inserting a table into a
viewed table", and the related subclause 14.18 "Effect of inserting a
table into a derived table". My interpretation of that is that the
CHECK OPTIONs of base relations should be checked before the CHECK
OPTIONs of outer views, which is how I coded it.

Perhaps it's worth adding a sentence to the docs to make that
explicit. So perhaps immediately before where it says "The CHECK
OPTION may not be used with RECURSIVE views.", a new paragraph saying
something like:

Note that if there is a hierarchy of views on top of other views, and
there are multiple conditions to be checked from different views in the
hierarchy, then any conditions to be checked on underlying base views
will always be checked before any conditions on higher level views.

Regards,
Dean




Re: ALTER SYSTEM SET command to change postgresql.conf parameters (RE: [HACKERS] Proposal for Allow postgresql.conf values to be changed via SQL [review])

2013-07-30 Thread Cédric Villemain
On Tuesday, July 30, 2013 at 05:27:12, Amit Kapila wrote:
> On Monday, July 29, 2013 7:15 PM Cédric Villemain wrote:
> > On Monday, July 29, 2013 at 13:47:57, Amit Kapila wrote:
> > > On Sunday, July 28, 2013 11:12 AM Amit kapila wrote:
> > > > On Friday, July 26, 2013 6:18 PM Tom Lane wrote:
> > > > 
> > > > Alvaro Herrera  writes:
> > > > >> The main contention point I see is where conf.d lives; the two
> > > > >> options are in $PGDATA or together with postgresql.conf.
> > > > 
> > > > Tom
> > > > 
> > > > >> and Robert, above, say it should be in $PGDATA; but this goes
> > > > 
> > > > against
> > > > 
> > > > >> Debian packaging and the Linux FHS (or whatever that thing is
> > > > 
> > > > called).
> > > > 
> > > > > Ordinarily, if postgresql.conf is not in $PGDATA, it will be
> > > > 
> > > > somewhere
> > > > 
> > > > > that the postmaster does not (and should not) have write
> > > > > permissions for.  I have no objection to inventing a conf.d
> > > > > subdirectory, I just
> > > > 
> > > > say
> > > > 
> > > > > that it must be under $PGDATA.  The argument that this is against
> > > > > FHS is utter nonsense, because anything we write there is not
> > > > > static configuration, it's just data.
> > > > > 
> > > > > Come to think of it, maybe part of the reason we're having such a
> > > > 
> > > > hard
> > > > 
> > > > > time getting to consensus is that people are conflating the
> > 
> > "snippet"
> > 
> > > > > part with the "writable" part?  I mean, if you are thinking you
> > > > > want system-management tools to be able to drop in configuration
> > > > > fragments
> > > > 
> > > > as
> > > > 
> > > > > separate files, there's a case to be made for a conf.d
> > > > > subdirectory
> > > > 
> > > > that
> > > > 
> > > > > lives somewhere that the postmaster can't necessarily write.  We
> > > > > just mustn't confuse that with support for ALTER SYSTEM SET.  I
> > > > > strongly believe that ALTER SYSTEM SET must not be designed to
> > > > > write anywhere outside $PGDATA.
> > > > 
> > > > I think if we can design conf.d separately for config files of
> > > > management tools, then it is better to have postgresql.auto.conf to
> > > > be in $PGDATA rather than in $PGDATA/conf.d
> > > > 
> > > > Kindly let me know if you feel otherwise, else I will update and
> > > > send patch tomorrow.
> > > 
> > > Modified patch to have postgresql.auto.conf in $PGDATA. Changes are
> > 
> > as
> > 
> > > below:
> > > 
> > > 1. initdb to create auto file in $PGDATA 2. ProcessConfigFile to open
> > > auto file from data directory, special case handling for initdb 3.
> > > AlterSystemSetConfigFile function to consider data directory as
> > > reference for operating on auto file 4. modified comments in code and
> > > docs to remove usage of config directory 5. modified function
> > > write_auto_conf_file() such that even if there is no configuration
> > > item to write, it should write header message.
> > > 
> > >This is to handle case when there is only one parameter value and
> > > 
> > > user set it to default, before this modification ,it
> > > 
> > >will write empty file.
> > 
> > I just read the patch, quickly.
> 
>   Thank you for review.
> 
> > You may split the patch thanks to validate_conf_option(), however it is
> > not a rule on postgresql-hacker.
> 
>   The review of the core functionality of the patch was done before the
>   introduction of the validate_conf_option() function. It was introduced
>   because there were some common parts in the core implementations of
>   AlterSystem and set_config_option().
>   I am really not sure that, after multiple rounds of review, splitting
>   it would add significant value.

Yes, it just seemed that this part was a significant one in the patch. I
understand that it is not worthwhile to split it now.

> 
> > Why not harcode in ParseConfigFp() that we should parse the auto.conf
> > file at the end  (and/or if USE_AUTO_CONF is not OFF)  instead of
> > hacking
> > ProcessConfigFile() with data_directory ? (data_directory should be set
> > at this
> > point) ...
> 
>   No, data_directory will not be set by that time in the case of initdb,
>   when ProcessConfigFile() is called from SelectConfigFiles().

ok.

> > just thinking, a very convenient way to enable/disable that
> > is just to add/remove the include directive in postgresql.conf. So no
> > change should be required in ParseConf at all. Except maybe
> > AbsoluteConfigLocation which should prefix the path to auto.conf.d with
> > data_directory. What I like with the include directive is that Sysadmin
> > can define some GUC *after* the auto.conf so he is sure those are not
> > 'erased' by auto.conf (or by the DBA).
> 
> I think earlier versions had this implementation, but later, based on
> suggestions, I changed it to automatic parsing of the auto file after
> postgresql.conf.

I probably missed those suggestions.

> > Also, it looks very interesting to stick to an one-file-for-many-GUC
> >

Re: ALTER SYSTEM SET command to change postgresql.conf parameters (RE: [HACKERS] Proposal for Allow postgresql.conf values to be changed via SQL [review])

2013-07-30 Thread Cédric Villemain
On Monday, July 29, 2013 at 18:03:21, Alvaro Herrera wrote:
> > Why not harcode in ParseConfigFp() that we should parse the auto.conf
> > file at the end  (and/or if USE_AUTO_CONF is not OFF)  instead of
> > hacking ProcessConfigFile() with data_directory ? (data_directory should
> > be set at this point) ... just thinking, a very convenient way to
> > enable/disable that is just to add/remove the include directive in
> > postgresql.conf. So no change should be required in ParseConf at all.
> > Except maybe AbsoluteConfigLocation which should prefix the path to
> > auto.conf.d with data_directory. What I like with the include directive
> > is that Sysadmin can define some GUC *after* the auto.conf so he is sure
> > those are not 'erased' by auto.conf (or by the DBA).
> 
> Why do you think DBAs would like an option to disable this feature?  I
> see no point in that.  And being able to relocate the parsing of
> auto.conf to be in the middle of postgresql.conf instead of at the end
> ... that seems nightmarish.  I mean, things are *already* nontrivial to
follow, I don't see what good can come from a DBA running ALTER SYSTEM
> and wondering why their changes don't take.

I don't find that hard to do or to understand, but if that has already
reached a consensus then let's do that.

> > Also, it looks very interesting to stick to an one-file-for-many-GUC when
> > we absolutely don't care : this file should (MUST ?) not be edited by
> > hand. The thing achieve is that it limits the access to ALTER SYSTEM.
> > One file per GUC allows to LWlock only this GUC, isn't it ? (and also
> > does not require machinery for holding old/new auto GUC, or at least
> > more simple).
> 
> This has already been debated, and we have already reached consensus
> (one file to rule them all).  I don't think it's a good idea to go over
> all that discussion again.

ok, I had simply lost track of the consensus on the technical objective.

-- 
Cédric Villemain +33 (0)6 20 30 22 52
http://2ndQuadrant.fr/
PostgreSQL: Support 24x7 - Développement, Expertise et Formation




Re: [HACKERS] Patch for reserved connections for replication users

2013-07-30 Thread Gibheer
Hi,

here is an update of my patch based on the discussion with Marko
Tiikkaja and Andres Freund.

Marko and I had the idea of introducing reserved connections based on
roles, as that would create a way to guarantee that specific roles can
connect when other roles use up all connections for whatever reason. But
Andres said that it would make connecting take much too long.

So to just fix the issue at hand, we decided that adding
max_wal_senders to the pool of reserved connections is better. With
that, we are sure that streaming replication can connect to the master.

So instead of creating a new configuration option I added
max_wal_senders to the reserved connections and changed the check for
new connections.

The attached test.pl is a small script to test whether the patch does what it should.

regards,

Stefan Radomski

diff --git a/doc/src/sgml/config.sgml b/doc/src/sgml/config.sgml
index 23ebc11..2ba98e2 100644
--- a/doc/src/sgml/config.sgml
+++ b/doc/src/sgml/config.sgml
@@ -2170,8 +2170,10 @@ include 'filename'
 processes). The default is zero, meaning replication is
 disabled. WAL sender processes count towards the total number
 of connections, so the parameter cannot be set higher than
-<xref linkend="guc-max-connections">.  This parameter can only
-be set at server start. wal_level must be set
+<xref linkend="guc-max-connections">. Like
+<xref linkend="guc-superuser-reserved-connections"> this option reserves
+connections from <xref linkend="guc-max-connections">. This parameter
+can only be set at server start. wal_level must be set
 to archive or hot_standby to allow
 connections from standby servers.

diff --git a/src/backend/utils/init/postinit.c b/src/backend/utils/init/postinit.c
index 2c7f0f1..3194894 100644
--- a/src/backend/utils/init/postinit.c
+++ b/src/backend/utils/init/postinit.c
@@ -436,7 +436,7 @@ InitializeMaxBackends(void)
 
 	/* the extra unit accounts for the autovacuum launcher */
 	MaxBackends = MaxConnections + autovacuum_max_workers + 1 +
-		+ max_worker_processes;
+		+ max_worker_processes + max_wal_senders;
 
 	/* internal error because the values were all checked previously */
 	if (MaxBackends > MAX_BACKENDS)
@@ -705,7 +705,7 @@ InitPostgres(const char *in_dbname, Oid dboid, const char *username,
 	 * don't allow them to consume the reserved slots, which are intended for
 	 * interactive use.
 	 */
-	if ((!am_superuser || am_walsender) &&
+	if ((!am_superuser && !am_walsender) &&
 		ReservedBackends > 0 &&
 		!HaveNFreeProcs(ReservedBackends))
 		ereport(FATAL,


test.pl
Description: Perl program
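The admission check the patch modifies in InitPostgres() can be sketched as a standalone predicate. This is illustrative only: HaveNFreeProcs() is replaced here by a plain free_procs count, and none of these names are PostgreSQL's actual API.

```c
#include <stdbool.h>

/*
 * Sketch of the patched admission rule: ordinary backends (neither
 * superuser nor walsender) may not consume the last reserved_backends
 * free slots. free_procs stands in for what HaveNFreeProcs() inspects.
 */
static bool
connection_allowed(bool am_superuser, bool am_walsender,
				   int reserved_backends, int free_procs)
{
	if ((!am_superuser && !am_walsender) &&
		reserved_backends > 0 &&
		free_procs < reserved_backends)
		return false;			/* reject: slots are reserved */
	return true;
}
```

The one-character change from `||` to `&&` is the point of the fix: with the old condition walsenders were subject to the reserved-slot check, whereas now both superusers and walsenders may draw from the reserved pool.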



Re: [HACKERS] Proposal - Support for National Characters functionality

2013-07-30 Thread Arulappan, Arul Shaji
> -Original Message-
> From: Tatsuo Ishii [mailto:is...@postgresql.org]
> 
> 
> Also I don't understand why you need UTF-16 support as a database
> encoding because UTF-8 and UTF-16 are logically equivalent, they are
> just different representations (encodings) of Unicode. That means if we
> already support UTF-8 (I'm sure we already do), there's no particular
> reason we need to add UTF-16 support.
> 
> Maybe you just want to support UTF-16 as a client encoding?

Given below is a design draft for this functionality:

Core new functionality (new code):
1)Create and register independent NCHAR/NVARCHAR/NTEXT data types.

2)Provide support for the new GUC nchar_collation to provide the
database with information about the default collation that needs to be
used for the new data types.

3)Create encoding conversion subroutines to convert strings between the
database encoding and UTF8 (from national strings to regular strings and
back).
PostgreSQL already have all required support (used for conversion
between the database encoding and client_encoding), so amount of the new
code will be minimal there.

4)Because all symbols from non-UTF8 encodings can be represented in
UTF8 (but the reverse is not true), comparison between N* types and the
regular string types inside the database will be performed in UTF8 form.
To achieve this, the following new IMPLICIT casts may need to be created:
NCHAR -> CHAR
NVARCHAR -> VARCHAR
NTEXT -> TEXT.

Casting in the reverse direction will be available too, but only as
EXPLICIT.
However, these casts could fail if national strings cannot be
represented in the database encoding in use.

All these casts will use subroutines created in 3).

Casting/conversion between N* types will follow the same rules/mechanics
as used for casting/conversion between usual (CHAR(N)/VARCHAR(N)/TEXT)
string types.
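The "could fail" case above - a UTF8 value with no representation in the database encoding - can be demonstrated with POSIX iconv(). This is a standalone sketch, not part of the proposal; the helper function name is invented here.

```c
#include <iconv.h>
#include <stdbool.h>
#include <string.h>

/*
 * Returns true if the given UTF-8 string can be converted to LATIN1,
 * i.e. whether an EXPLICIT national-to-regular cast would succeed for
 * a LATIN1 database. iconv() reports failure for characters the target
 * encoding cannot represent.
 */
static bool
representable_in_latin1(const char *utf8)
{
	iconv_t		cd = iconv_open("LATIN1", "UTF-8");
	char		outbuf[256];
	char	   *inp = (char *) utf8;
	char	   *outp = outbuf;
	size_t		inlen = strlen(utf8);
	size_t		outlen = sizeof(outbuf);
	bool		ok;

	ok = iconv(cd, &inp, &inlen, &outp, &outlen) != (size_t) -1;
	iconv_close(cd);
	return ok;
}
```

For example, plain ASCII converts fine, while "\xCE\xA9" (UTF-8 for GREEK CAPITAL LETTER OMEGA) has no LATIN1 equivalent and makes the conversion fail - exactly the situation where such a cast would have to raise an error.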


5)Comparison between NATIONAL string values will be performed via
specialized UTF8 optimized functions (with respect of the
nchar_collation setting).


6)Client input/output of NATIONAL strings - NATIONAL strings will
respect the client_encoding setting, and their values will be
transparently converted to the requested client_encoding before
sending(receiving) to client (the same mechanics as used for usual
string types).
So no mixed encoding in client input/output will be supported/available.

7)Create set of the regression tests for these new data types.


Additional changes:
1)ECPG support for these new types
2) Support in the database drivers for the data types.

Rgds,
Arul Shaji




> --
> Tatsuo Ishii
> SRA OSS, Inc. Japan
> English: http://www.sraoss.co.jp/index_en.php
> Japanese: http://www.sraoss.co.jp



