* Tom Lane ([EMAIL PROTECTED]) wrote:
Stephen Frost [EMAIL PROTECTED] writes:
* Bruce Momjian ([EMAIL PROTECTED]) wrote:
Would you show an example of the invalid value this is trying to avoid?
Well, the way I discovered the problem was by sending a timestamp in
double format when
* Bruce Momjian ([EMAIL PROTECTED]) wrote:
Stephen Frost wrote:
* Bruce Momjian ([EMAIL PROTECTED]) wrote:
Would you show an example of the invalid value this is trying to avoid?
Well, the way I discovered the problem was by sending a timestamp
* Bruce Momjian ([EMAIL PROTECTED]) wrote:
Stephen Frost wrote:
I'll see about writing up a proper test case/schema. Looks like I'm
probably most of the way there at this point, really. ;)
I wasn't aware you could throw binary values into the timestamp fields
like that. I thought you
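For reference, the mismatch being discussed comes from PostgreSQL having two binary wire encodings for TIMESTAMP, both 8 bytes and both counted from the 2000-01-01 epoch. A minimal Python sketch of the two (the helper names are mine, not libpq's):

```python
import struct
from datetime import datetime, timezone

# Both binary TIMESTAMP formats count from the "Postgres epoch".
PG_EPOCH = datetime(2000, 1, 1, tzinfo=timezone.utc)

def encode_int_timestamp(dt):
    """Integer-datetimes build: big-endian int64, microseconds since 2000."""
    usecs = round((dt - PG_EPOCH).total_seconds() * 1_000_000)
    return struct.pack('>q', usecs)

def encode_float_timestamp(dt):
    """Float-datetimes build: big-endian float8, seconds since 2000."""
    return struct.pack('>d', (dt - PG_EPOCH).total_seconds())

ts = datetime(2004, 6, 1, 12, 0, tzinfo=timezone.utc)
# Both encodings are 8 bytes, so a server built one way will happily
# misinterpret bytes produced the other way -- hence the garbage
# timestamp values discussed in this thread.
print(encode_int_timestamp(ts) != encode_float_timestamp(ts))
```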
* Bruce Momjian ([EMAIL PROTECTED]) wrote:
Considering all the other things the database is doing, I can't imagine
that would be a measurable improvement.
It makes it easier on my client program too which is listening to an
ethernet interface and trying to process all of the packets coming in
* Bruce Momjian ([EMAIL PROTECTED]) wrote:
Tom Lane wrote:
Bruce Momjian [EMAIL PROTECTED] writes:
I wasn't aware you could throw binary values into the timestamp fields
like that. I thought you needed to use a C string for the value.
This facility was added in 7.4 as part of the
* Tom Lane ([EMAIL PROTECTED]) wrote:
I said:
I'll make a note to do something with this issue after the TZ patch
is in.
I've applied a patch to take care of this problem.
Great, thanks, much appreciated. I'll test once 7.5 goes beta.
Stephen
Greetings,
Small patch to move get_grosysid() from catalog/aclchk.c to
utils/cache/lsyscache.c where it can be used by other things. Also
cleans up both get_usesysid() and get_grosysid() a bit. This is in
preparation for 'Group Ownership' support.
Thanks,
* Tom Lane ([EMAIL PROTECTED]) wrote:
Stephen Frost [EMAIL PROTECTED] writes:
Small patch to clean up the grammar a bit by adding 'GroupId',
'SchemaName' and 'SavePointId'.
I don't particularly see the value of this --- especially since the
direction of future development is likely
* Tom Lane ([EMAIL PROTECTED]) wrote:
Stephen Frost [EMAIL PROTECTED] writes:
Do you agree with the other changes (ColId -> SchemaName, ColId ->
SavePointId)?
I don't really see the value of them. They add some marginal
documentation I suppose, but they also make the grammar bigger
* Tom Lane ([EMAIL PROTECTED]) wrote:
Given other discussion, it might be best to rename it to RoleId and use
that for both users and groups.
Ok, should I change SchemaName SavePointId back to ColId, leave them
as in the patch, change them to RoleId, or something else? Neither
ColId nor
* Tom Lane ([EMAIL PROTECTED]) wrote:
Stephen Frost [EMAIL PROTECTED] writes:
Ok, should I change SchemaName SavePointId back to ColId,
I'd just leave them as ColId. I don't think much would be gained by
introducing those productions.
Done, here's the patch.
Thanks
is introduced.
Thanks,
Stephen
Stephen Frost wrote:
* Tom Lane ([EMAIL PROTECTED]) wrote:
Stephen Frost [EMAIL PROTECTED] writes:
Ok, should I change SchemaName SavePointId back to ColId,
I'd just leave them as ColId. I don't think much would be gained
Greetings,
Attached please find a patch to change how the permissions checking
for alter-owner is done. With roles there can be more than one
'owner' of an object and therefore it becomes sensible to allow
specific cases of ownership change for non-superusers.
The permission checks
* Tom Lane ([EMAIL PROTECTED]) wrote:
Stephen Frost [EMAIL PROTECTED] writes:
Tom, if you're watching, are you working on this? I can probably spend
some time today on it, if that'd be helpful.
I am not; I was hoping you'd deal with SET ROLE. Is it really much
different from SET SESSION
* Petr Jelinek ([EMAIL PROTECTED]) wrote:
+ if (!(superuser()
+       || ((Form_pg_database) GETSTRUCT(tuple))->datdba == GetUserId()))
+     aclcheck_error(ACLCHECK_NOT_OWNER, ACL_KIND_DATABASE,
+                    stmt->dbname);
This should almost
* Bruce Momjian (pgman@candle.pha.pa.us) wrote:
This patch disables page writes to WAL when fsync is off, because with
no fsync guarantee, the page write recovery isn't useful.
This doesn't seem quite right to me. What happens with PITR? And if
Postgres crashes? While many people seriously
* Tom Lane ([EMAIL PROTECTED]) wrote:
Stephen Frost [EMAIL PROTECTED] writes:
Attached please find a patch to change how the permissions checking
for alter-owner is done. With roles there can be more than one
'owner' of an object and therefore it becomes sensible to allow
* Tom Lane ([EMAIL PROTECTED]) wrote:
BTW, I realized we do not support granting roles to PUBLIC:
regression=# create role r;
CREATE ROLE
regression=# grant r to public;
ERROR: role public does not exist
but as far as I can tell SQL99 expects this to work.
Indeed, I believe you're
* Tom Lane ([EMAIL PROTECTED]) wrote:
Another issue: I like the has_role() function and in fact think it needs
to come in multiple variants just like has_table_privilege and friends:
has_role(name, name)
has_role(name, oid)
has_role(oid, name)
has_role(oid, oid)
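Illustrative only: assuming the four variants above were added with those signatures, they might be exercised like so (role names and OIDs are made up; the function that ultimately shipped in core was pg_has_role, which also takes a privilege argument):

```sql
SELECT has_role('stephen', 'admins');          -- name, name
SELECT has_role('stephen', 16384::oid);        -- name, oid
SELECT has_role(10::oid, 'admins');            -- oid, name
SELECT has_role(10::oid, 16384::oid);          -- oid, oid
```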
Greetings,
The following patch implements individual privileges for TRUNCATE,
VACUUM and ANALYZE. Includes documentation and regression test
updates. Resolves TODO item 'Add a separate TRUNCATE permission'.
Created off of current (2005/01/03) CVS TIP.
At least the 'no one interested
* Tom Lane ([EMAIL PROTECTED]) wrote:
Stephen Frost [EMAIL PROTECTED] writes:
The following patch implements individual privileges for TRUNCATE,
VACUUM and ANALYZE. Includes documentation and regression test
updates. Resolves TODO item 'Add a separate TRUNCATE permission
* daveg ([EMAIL PROTECTED]) wrote:
We rely heavily on truncate as delete for large numbers of rows is very
costly. An example, we copy_in batches of rows from several sources through
the day to a pending work table, with another process periodically
processing the rows and sweeping them into
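The batch workflow described above can be sketched in SQL (table names invented; the point is that TRUNCATE releases the heap in one step and is transactional in PostgreSQL, where DELETE must visit every row):

```sql
BEGIN;
-- Sweep the accumulated rows onward ...
INSERT INTO processed SELECT * FROM pending_work;
-- ... then reset the staging table cheaply; DELETE would have to
-- visit and dead-mark every row instead.
TRUNCATE pending_work;
COMMIT;
```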
* Tom Lane ([EMAIL PROTECTED]) wrote:
Stephen Frost [EMAIL PROTECTED] writes:
It seems like pg_restore really should be able to handle COPY errors
correctly by skipping to the end of the COPY data segment when the
initial COPY command comes back as an error.
Send a patch
* Stephen Frost ([EMAIL PROTECTED]) wrote:
* Tom Lane ([EMAIL PROTECTED]) wrote:
Stephen Frost [EMAIL PROTECTED] writes:
It seems like pg_restore really should be able to handle COPY errors
correctly by skipping to the end of the COPY data segment when the
initial COPY command
* Stephen Frost ([EMAIL PROTECTED]) wrote:
Needs to be changed to handle whitespace in front of the actual 'COPY',
unless someone else has a better idea. This should be reasonably
trivial to do though... If you'd like me to make that change and send
in a new patch, just let me know.
Fixed
of the PostgreSQL committers reviews
and approves it.
Great! It'd be really nice to have this fix in 8.1.3... :)
Thanks again,
Stephen
---
Stephen Frost wrote:
* Stephen Frost ([EMAIL PROTECTED
* Bruce Momjian (pgman@candle.pha.pa.us) wrote:
Stephen Frost wrote:
* Bruce Momjian (pgman@candle.pha.pa.us) wrote:
Stephen Frost wrote:
Great! It'd be really nice to have this fix in 8.1.3... :)
No, it will not be in 8.1.3. It is a new feature
* Andrew Dunstan ([EMAIL PROTECTED]) wrote:
this is a hopeless way of giving a reference. Many users don't keep list
emails. If you want to refer to a previous post you should give a
reference to the web archives.
Sorry, I'm actually pretty used to using Message IDs for references (we
do it
* Tom Lane ([EMAIL PROTECTED]) wrote:
Bruce Momjian pgman@candle.pha.pa.us writes:
Andrew Dunstan wrote:
I assume you are referring to this post:
http://archives.postgresql.org/pgsql-bugs/2006-01/msg00188.php
OK, that helps. The solution is to not do that, meaning install
postgis
* Andrew Dunstan ([EMAIL PROTECTED]) wrote:
Tom Lane said:
Also, it might be possible to depend on whether libpq has entered the
copy in state. I'm not sure this works very well, because if there's
an error in the COPY command itself (not in the following data) then we
probably don't
* Tom Lane ([EMAIL PROTECTED]) wrote:
ISTM you should be depending on the archive structure: at some level,
at least, pg_restore knows darn well whether it is dealing with table
data or SQL commands.
Alright, to do this, ExecuteSqlCommandBuf would need to be modified to
return an error-code
* Stephen Frost ([EMAIL PROTECTED]) wrote:
* Tom Lane ([EMAIL PROTECTED]) wrote:
ISTM you should be depending on the archive structure: at some level,
at least, pg_restore knows darn well whether it is dealing with table
data or SQL commands.
[...]
I'd be happy to work this up, and I
* Tom Lane ([EMAIL PROTECTED]) wrote:
Stephen Frost [EMAIL PROTECTED] writes:
* Tom Lane ([EMAIL PROTECTED]) wrote:
This is not super surprising because the original design approach for
pg_restore was to bomb out on any sort of difficulty whatsoever. That
was justly complained about
* Tom Lane ([EMAIL PROTECTED]) wrote:
I agree. I wonder if it wouldn't be cleaner to pass the information in
the other direction, ie, send a boolean down to PrintTocData saying you
are sending SQL commands or you are sending COPY data. Then, instead
of depending only on the libpq state to
* Tom Lane ([EMAIL PROTECTED]) wrote:
Stephen Frost [EMAIL PROTECTED] writes:
I believe the attached patch does this now. Under my test case it
correctly handled things. I'm certainly happier with it this way and
apologize for not realizing this better approach sooner. Please
comment
Greetings,
The attached patch fixes a bug which was originally brought up in May
of 2002 in this thread:
http://archives.postgresql.org/pgsql-interfaces/2002-05/msg00083.php
The original bug reporter also supplied a patch to fix the problem:
* Tom Lane ([EMAIL PROTECTED]) wrote:
Stephen Frost [EMAIL PROTECTED] writes:
The attached patch fixes a bug which was originally brought up in May
of 2002 in this thread:
Now that I've looked at it, I find this patch seems fairly wrongheaded.
AFAICS the entire point of the original
* Tom Lane ([EMAIL PROTECTED]) wrote:
Stephen Frost [EMAIL PROTECTED] writes:
I have little idea of how expensive the operations called by
pg_krb5_init really are. If they are expensive then it'd probably
make sense to keep the current static variables but treat 'em as a
one-element
* Tom Lane ([EMAIL PROTECTED]) wrote:
Right offhand I like the idea of pushing it into connectOptions2 --- can
you experiment with that? Seems like there is no reason to call
Kerberos if the user supplies the name to connect as.
Patch attached. After looking through the code around this I
* Tom Lane ([EMAIL PROTECTED]) wrote:
This is probably not a good idea --- changing the API behavior in
pursuit of saving a few cycles is just going to get people mad at us.
Fair enough.
I think we'd have to refactor the code so that PQsetdbLogin gets a
PQconninfoOption array, overrides
Greetings,
Please find below a patch to add the array_accum aggregate as a
built-in using two new C functions defined in array_userfuncs.c.
These functions simply expose the pre-existing efficient array
building routines used elsewhere in the backend (accumArrayResult
and
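Assuming the patch is applied, usage matches the array_accum example long shown in the CREATE AGGREGATE documentation, just without the user having to define the aggregate first:

```sql
-- Collect the column names of each table into one array per table.
SELECT attrelid::regclass AS tbl, array_accum(attname)
FROM pg_attribute
WHERE attnum > 0
GROUP BY attrelid;
```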
* Neil Conway ([EMAIL PROTECTED]) wrote:
On Wed, 2006-10-11 at 00:51 -0400, Stephen Frost wrote:
Here, the actual state type for any aggregate call is the array type
!having the actual input type as elements. Note: array_accum() is now
!a built-in aggregate which uses a much
* Tom Lane ([EMAIL PROTECTED]) wrote:
(However, now that we support nulls in arrays, meseems a more consistent
definition would be that it allows null inputs and just includes them in
the output. So probably you do need it non-strict.)
This was my intention.
I'm inclined to think that this
* Tom Lane ([EMAIL PROTECTED]) wrote:
That's not really the flavor of solution I'd like to have. Ideally,
it'd actually *work* to write
my_ffunc(my_sfunc(my_sfunc(null, 1), 2))
and get the same result as aggregating over the values 1 and 2. The
trick is to make sure that my_sfunc
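A sketch of the invariant Tom is describing, with the hypothetical my_sfunc/my_ffunc from his mail wrapped into an aggregate (the stype and argument types here are assumptions):

```sql
CREATE AGGREGATE my_agg (int) (
    sfunc     = my_sfunc,
    stype     = internal,
    finalfunc = my_ffunc
);

-- The invariant: composing the pieces by hand ...
SELECT my_ffunc(my_sfunc(my_sfunc(NULL, 1), 2));
-- ... should give the same answer as the aggregate proper.
SELECT my_agg(x) FROM (VALUES (1), (2)) AS t(x);
```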
* Tom Lane ([EMAIL PROTECTED]) wrote:
Stephen Frost [EMAIL PROTECTED] writes:
The other issue is, in the above scenario
is it acceptable to modify the result of my_sfunc(null, 1) in the
my_sfunc(..., 2) call?
Yes, because the only place a nonnull value of the type could have come
from is a my_sfunc
* Henry B. Hotz ([EMAIL PROTECTED]) wrote:
On Jun 22, 2007, at 9:56 AM, Magnus Hagander wrote:
Most likely it's just checking the keytab to find a principal with the
same name as the one presented from the client. Since one is
present, it
loads it up automatically, and verifies against it.
* Gregory Stark ([EMAIL PROTECTED]) wrote:
Joe Conway [EMAIL PROTECTED] writes:
If there are no objections I'll commit this later today.
My objection is that I think we should still revoke access for non-superuser
by default. The patch makes granting execute reasonable for most users but
* Gregory Stark ([EMAIL PROTECTED]) wrote:
Actually from a security point of view revoking public execute is pretty much
the same as making a function super-user-only. The only difference is how much
of a hassle it is for the super-user to grant access. Perhaps we should
reconsider whether any
* Joe Conway ([EMAIL PROTECTED]) wrote:
If you are going to argue that we should revoke access for non-superusers
by default for dblink, then you are also arguing that we should do the same
for every function created with any untrusted language.
Uh, no, one doesn't imply the other. It
* Joe Conway ([EMAIL PROTECTED]) wrote:
Consider a scenario like package x uses arbitrary function y in an
untrusted language z. Exact same concerns arise.
No, it doesn't... Said arbitrary function in y, in untrusted language
z, could be perfectly safe for users to call. Being written in an
* Joe Conway ([EMAIL PROTECTED]) wrote:
Stephen Frost wrote:
No, it doesn't... Said arbitrary function in y, in untrusted language
z, could be perfectly safe for users to call.
*Could* be. But we just said that the admin was not interested in reading
the documentation, and has
* Gregory Stark ([EMAIL PROTECTED]) wrote:
Joe Conway [EMAIL PROTECTED] writes:
Consider a scenario like package x uses arbitrary function y in an
untrusted language z. Exact same concerns arise.
Well arbitrary function may or may not actually do anything that needs to be
restricted.
* Joe Conway ([EMAIL PROTECTED]) wrote:
Stephen Frost wrote:
I see.. So all the functions in untrusted languages that come with PG
initially should be checked over by every sysadmin when installing PG
every time... And the same for PostGIS, and all of the PL's that use
untrusted languages
* Joe Conway ([EMAIL PROTECTED]) wrote:
Get serious. Internal functions are specifically designed and maintained to
be safe within the confines of the database security model. We are
discussing extensions to the core, all of which must be installed by
choice, by a superuser.
Extensions
* Tom Lane ([EMAIL PROTECTED]) wrote:
Joe Conway [EMAIL PROTECTED] writes:
But if you know of a security risk related to using libpq
with a password authenticated connection, let's hear it.
As near as I can tell, the argument is that dblink might be used to send
connection-request
* Magnus Hagander ([EMAIL PROTECTED]) wrote:
Here's an updated version of this patch. This version has full SSPI support
in the server as well, so I can do both kerberos and NTLM between two
windows machines using the negotiate method.
Great! Also, I've tested that it works under Windows
* Magnus Hagander ([EMAIL PROTECTED]) wrote:
On Thu, Jul 19, 2007 at 06:22:57PM -0400, Stephen Frost wrote:
My thinking would be to have the autoconf to disable it, but enable it
by default. I don't feel particularly strongly about it though.
Do you see a use-case where someone would
Greetings,
Please find attached a minor patch to remove the constraints that a
user can't include the delimiter or quote characters in a 'NULL AS'
string when importing CSV files.
This allows a user to explicitly request that NULL conversion happen
on fields which are quoted. As the
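A toy model of the server-side decision in question (the function name and the allow_quoted_null flag are invented for illustration; stock COPY ... CSV never null-converts a quoted field, which is exactly the restriction the patch removes):

```python
def convert_null(field, was_quoted, null_as, allow_quoted_null=False):
    """Decide whether an incoming CSV field becomes NULL.

    Stock rule: only an unquoted field matching the NULL string is
    converted, so "" round-trips as an empty string.  With the patched
    behaviour (allow_quoted_null), a quoted match converts too.
    """
    if field == null_as and (not was_quoted or allow_quoted_null):
        return None
    return field

print(repr(convert_null('', was_quoted=True, null_as='')))
print(repr(convert_null('', was_quoted=False, null_as='')))
print(repr(convert_null('', was_quoted=True, null_as='',
                        allow_quoted_null=True)))
```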
* Gregory Stark ([EMAIL PROTECTED]) wrote:
Tom Lane [EMAIL PROTECTED] writes:
Stephen Frost [EMAIL PROTECTED] writes:
Please find attached a minor patch to remove the constraints that a
user can't include the delimiter or quote characters in a 'NULL AS'
string when importing CSV
* Tom Lane ([EMAIL PROTECTED]) wrote:
Anybody think this is good, bad, or silly? Does the issue need
explicit documentation, and if so where and how?
I'm going to have to vote 'silly' on this one. While I agree that in
general we should discourage, and not provide explicit command-line
Tom, et al,
* Tom Lane ([EMAIL PROTECTED]) wrote:
I'm not sure where we go from here. Your GSOC student has disappeared,
right? Is anyone else willing to take up the patch and work on it?
I'm willing to take it up and work on it. I'm very interested in having
column-level privileges in PG.
* Andrew Dunstan ([EMAIL PROTECTED]) wrote:
Tom Lane wrote:
I'm not sure where we go from here. Your GSOC student has disappeared,
right? Is anyone else willing to take up the patch and work on it?
No, he has not disappeared at all. He is going to work on fixing issues
and getting the
* daveg ([EMAIL PROTECTED]) wrote:
The feature that the proposed patch enables is to create pg_dump custom
format archives for multiple tables with a predicate. No amount of csv or
xml will do that. Contrived example:
Uh, pg_dump's custom format really isn't particularly special, to be
honest.
Simon,
I agree with adding these options in general, since I find myself
frustrated by having to vi huge dumps to change simple schema things.
A couple of comments on the patch though:
- Conflicting option handling
I think we are doing our users a disservice by putting it on them to
Simon,
* Simon Riggs ([EMAIL PROTECTED]) wrote:
On Sun, 2008-07-20 at 05:47 +0100, Simon Riggs wrote:
On Sat, 2008-07-19 at 23:07 -0400, Stephen Frost wrote:
[...]
- Conflicting option handling
Thanks for putting in the extra code to explicitly indicate which
conflicting options were
* Simon Riggs ([EMAIL PROTECTED]) wrote:
On Sun, 2008-07-20 at 17:43 -0400, Stephen Frost wrote:
Even this doesn't cover everything though- it's too focused on tables
and data loading. Where do functions go? What about types?
Yes, it is focused on tables and data loading. What about
* daveg ([EMAIL PROTECTED]) wrote:
One observation, indexes should be built right after the table data
is loaded for each table, this way, the index build gets a hot cache
for the table data instead of having to re-read it later as we do now.
That's not how pg_dump has traditionally worked,
* Simon Riggs ([EMAIL PROTECTED]) wrote:
The options split the dump into 3 parts that's all: before the load, the
load and after the load.
--schema-pre-load says
Dumps exactly what --schema-only would dump, but only those
statements before the data load.
What is it you are
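As a sketch, the three-way split Simon describes would look like this on the command line (--schema-pre-load is named in the patch; --schema-post-load is my assumed name for its counterpart, while --data-only already exists):

```
pg_dump --schema-pre-load  mydb > pre.sql    # DDL needed before the load
pg_dump --data-only        mydb > data.sql   # the COPY data itself
pg_dump --schema-post-load mydb > post.sql   # indexes, constraints, triggers
```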
Simon,
* Simon Riggs ([EMAIL PROTECTED]) wrote:
I hadn't realized that Simon was using pre-schema and post-schema
to name the first and third parts. I'd agree that this is confusing
nomenclature: it looks like it's trying to say that the data is the
schema, and the schema is not! How
Tom, et al,
* Tom Lane ([EMAIL PROTECTED]) wrote:
Ah, I see. No objection to those switch names, at least assuming we
want to stick to positive-logic switches. What did you think of the
negative-logic suggestion (--omit-xxx)?
My preference is for positive-logic switches in general. The
Simon,
* Simon Riggs ([EMAIL PROTECTED]) wrote:
...and with command line help also.
The documentation and whatnot looks good to me now. There are a couple
of other issues I found while looking through and testing the patch
though-
Index: src/bin/pg_dump/pg_dump.c
* Simon Riggs ([EMAIL PROTECTED]) wrote:
The key capability here is being able to split the dump into multiple
pieces. The equivalent capability on restore is *not* required, because
once the dump has been split the restore never needs to be. It might
seem that the patch should be symmetrical
* Tom Lane ([EMAIL PROTECTED]) wrote:
Simon Riggs [EMAIL PROTECTED] writes:
On Fri, 2008-07-25 at 19:16 -0400, Tom Lane wrote:
The key problem is that pg_restore is broken:
The key capability here is being able to split the dump into multiple
pieces. The equivalent capability on restore
* Tom Lane ([EMAIL PROTECTED]) wrote:
Stephen Frost [EMAIL PROTECTED] writes:
I dislike, and doubt that I'd use, this approach. At the end of the
day, it ends up processing the same (very large amount of data) multiple
times.
Well, that's easily avoided: just replace the third step
* Tom Lane ([EMAIL PROTECTED]) wrote:
Right, but the parallelization is going to happen sometime, and it is
going to happen in the context of pg_restore. So I think it's pretty
silly to argue that no one will ever want this feature to work in
pg_restore.
I think you've about convinced me on
* Joshua D. Drake ([EMAIL PROTECTED]) wrote:
Custom format rocks for partial set restores from a whole dump. See the
TOC option :)
I imagine it does, but that's very rarely what I need. Most of the time
we're dumping out a schema to load it into a separate schema (usually on
another host).