Re: [HACKERS] strange behaviour of psql \e command

2016-01-01 Thread Neil Tiffin

> On Jan 1, 2016, at 11:14 AM, Tom Lane  wrote:
> 
> Oleg Bartunov  writes:
>> I noticed that psql's \e command didn't work for me, complaining 'error
>> opening file'. I did some research and found that the following setting in
>> joe's configuration (if I choose joe as the editor) causes the problem:
>> -exask ^KX always confirms file name
>> That worked for decades :), so I'm wondering what changed, either in El
>> Capitan or in the psql code?
> 
> I don't recall that we've changed anything much at all about \e recently,
> so probably the way to bet is that the El Capitan update broke something
> :-(.  Can anyone else confirm?
> 

Seems to work fine for me with vi on OS X 10.11.2 (15C50) and VIM - Vi IMproved 
7.3 (2010 Aug 15, compiled Oct 27 2015 16:22:14) Compiled by r...@apple.com

Neil



Re: [HACKERS] [patch] to build docs on Mac OS X El Capitan with MacPorts

2015-11-03 Thread Neil Tiffin
I should add that this was required for a postgres git build that uses MacPorts to 
supply dependencies, and not for a build of postgres itself from MacPorts.

Neil

> On Nov 3, 2015, at 8:11 AM, Robert Haas  wrote:
> 
> On Sun, Nov 1, 2015 at 8:41 AM, Neil Tiffin  wrote:
>> The attached patch was required to get the docs to build on Mac OS X 10.11.1 
>> (15B42) with MacPorts 2.3.4.  After changing docbook.m4, 'autoreconf' has to 
>> be run.  This patch does not include the new version of 'configure'.
> 
> Can anyone else verify this?





[HACKERS] [patch] to build docs on Mac OS X El Capitan with MacPorts

2015-11-01 Thread Neil Tiffin
The attached patch was required to get the docs to build on Mac OS X 10.11.1 
(15B42) with MacPorts 2.3.4.  After changing docbook.m4, 'autoreconf' has to be 
run.  This patch does not include the new version of 'configure'.

As explained in the devel documentation (J.2.6), the docs could be made to build 
by adding 'export DOCBOOKSTYLE=/opt/local/share/dsssl/docbook-dsssl'.
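
Until the patch is in, a rough sketch of that J.2.6 workaround on a MacPorts 
machine (the export line is the fix described above; the configure/make steps 
are just the usual doc-build steps and may differ on other setups):

# Point configure at MacPorts' DocBook DSSSL stylesheets, then build the docs.
export DOCBOOKSTYLE=/opt/local/share/dsssl/docbook-dsssl
./configure
cd doc/src/sgml
make html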

However, in J.2.4 it implies that just using MacPorts will work without 
additional setup.  This patch makes that happen.

diff --git a/config/docbook.m4 b/config/docbook.m4
index 4304fa7..f9d 100644
--- a/config/docbook.m4
+++ b/config/docbook.m4
@@ -55,7 +55,7 @@ AC_CACHE_VAL([pgac_cv_path_stylesheets],
 [if test -n "$DOCBOOKSTYLE"; then
   pgac_cv_path_stylesheets=$DOCBOOKSTYLE
 else
-  for pgac_prefix in /usr /usr/local /opt /sw; do
+  for pgac_prefix in /usr /usr/local /opt /opt/local /sw; do
 for pgac_infix in share lib; do
   for pgac_postfix in \
 sgml/stylesheets/nwalsh-modular \
@@ -65,7 +65,8 @@ else
 sgml/docbook/dsssl/modular \
 sgml/docbook/stylesheet/dsssl/modular \
 sgml/docbook/dsssl-stylesheets \
-sgml/dsssl/docbook-dsssl-nwalsh
+sgml/dsssl/docbook-dsssl-nwalsh \
+dsssl/docbook-dsssl
   do
 pgac_candidate=$pgac_prefix/$pgac_infix/$pgac_postfix
 if test -r "$pgac_candidate/html/docbook.dsl" \







Re: [HACKERS] RFC: Remove contrib entirely

2015-06-05 Thread Neil Tiffin

> On Jun 4, 2015, at 3:11 PM, David E. Wheeler  wrote:
> 
> On Jun 4, 2015, at 11:53 AM, Neil Tiffin  wrote:
> 
>> I have looked at PGXN and would never install anything from it.  Why?  
>> Because it is impossible to tell, without inside knowledge or a lot of work, 
>> what is actively maintained and tested, and what is an abandoned 
>> proof-of-concept or idea.
> 
> Well, you can see the last release dates for a basic idea of that sort of 
> thing. Also the release status (stable, unstable, testing).
> 
>> There is no indication of what versions of pg any of PGXN modules are tested 
>> on, or even if there are tests that can be run to prove the module works 
>> correctly with a particular version of pg.
> 
> Yeah, I’ve been meaning to integrate http://pgxn-tester.org/ results for all 
> modules, which would help with that. In the meantime you can hit that site 
> itself. Awesome work by Tomas Vondra.
> 
>> There are many modules that have not been updated for several years.  What 
>> is their status?  If they break is there still someone around to fix them or 
>> even cares about them?  If not, then why waste my time.
> 
> These are challenges to open-source software in general, and not specific to 
> PGXN.

Of course, but the solution is having tools to easily determine the risk.  The 
fact that the modules pass or fail the tests on pgxn-tester is a significant 
step.  Knowing how long the module has been failing would be even better.

> 
>> So adding to Jim’s comment above, anything that vets or approves PGXN 
>> modules is, in my opinion, essentially required to make PGXN useful for 
>> anything other than a scratchpad.
> 
> Most of the distributions on PGXN feature links to their source code 
> repositories.
> 
>> A big help would be to pull in the date of the last git commit in the module 
>> overview and ask the authors to edit the readme to add what major version of 
>> pg the author last tested or ran on.
> 
> That’s difficult to maintain; I used to do it for pgTAP, was too much work. 
> pgxn-tester.org is a much better idea.

Yes it is.

Wow, that is awesome work (pgxn-tester.org).  Thanks to Tomas Vondra, and to David 
for pointing it out.  This improved my opinion of PGXN significantly.  It might 
be helpful to at least put a link on the PGXN home page; beta or not, it's 
awesome, and even in beta it shows the future direction.

It would be nice to see the development branch in the tests.  One project I am 
working on now targets 9.5.

It is important to know how long a stable module has been failing for a 
specific version of pg.  This is IMO a critical measure of the level of support 
a module is receiving. 

Neil






Re: [HACKERS] RFC: Remove contrib entirely

2015-06-04 Thread Neil Tiffin

> On Jun 4, 2015, at 10:55 AM, Jim Nasby  wrote:
> 
> Personally, I'd rather we publish a list of formally vetted and approved 
> versions of PGXN modules. There are many benefits to that, and the downside 
> of not having that stuff as part of make check would be overcome by the 
> explicit testing we would need to have for approved modules.

I have looked at PGXN and would never install anything from it.  Why?  Because 
it is impossible to tell, without inside knowledge or a lot of work, what is 
actively maintained and tested, and what is an abandoned proof-of-concept or 
idea.  There is no indication of what versions of pg any of the PGXN modules are 
tested on, or even whether there are tests that can be run to prove a module works 
correctly with a particular version of pg.  There are many modules that have 
not been updated for several years.  What is their status?  If they break, is 
there still someone around who cares about them and will fix them?  If not, then 
why waste my time?

So adding to Jim’s comment above, anything that vets or approves PGXN modules 
is, in my opinion, essentially required to make PGXN useful for anything other 
than a scratchpad.  A big help would be to pull in the date of the last git 
commit in the module overview and ask the authors to edit the readme to add 
what major version of pg the author last tested or ran on.
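
For what it's worth, the last-commit date is cheap to pull from a checked-out 
copy of a module (the path below is hypothetical; any cloned repository would do):

# Date of the most recent commit in a module's source repository.
git -C ./some-pgxn-module log -1 --format=%cd --date=short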

When I install from contrib, at least I have some minimal assurance that the 
code is meant to work with the corresponding version of pg.

Neil



Re: [HACKERS] pgaudit - an auditing extension for PostgreSQL

2015-02-18 Thread Neil Tiffin
Stephen,

I meant it to go to the list, but hit the wrong button.

> On Feb 17, 2015, at 7:01 PM, Stephen Frost  wrote:
> 
> Neil,
> 
> I noticed that you email'd me directly on this reply.  Not sure if you
> intended to or not, but I'm fine with my response going to the list.
> 
> * Neil Tiffin (ne...@neiltiffin.com) wrote:
>>> On Feb 17, 2015, at 1:10 PM, Stephen Frost  wrote:
>>> If the DB account isn't a superuser then everything changes.  There's no
>>> point fighting with the OS user- they can run some other PG binary which
>>> they've copied over locally and run SQL with that, or copy all the files
>>> over to another server which they have complete access to.  The fact
>>> that they can also connect to the DB and run SQL isn’t really an issue.
>> 
>> That's the point. If this environment matters, then the db superuser would not 
>> be an authorized os superuser (with root or even close privileges). And no, 
>> they could not be running some other binary.
>> 
>> One way to do pg superuser auditing is to utilize a file (outside of the pg 
>> data directory, which probably violates something in pg) like 
>> /etc/pg_su_audit that has os root rw and r for all others (or the equivalent 
>> for other os’s) containing a URL.  If this file is present, send all db 
>> superuser usage to the URL.  If this file is present with the wrong 
>> privileges, then don’t start pg. Done.  No configuration in pg config files, 
>> no GUCs, nothing for the pg superuser to mess around with, no tables, no 
>> ability for any pg superuser to configure or control.  
> 
> This approach doesn't violate anything in PG and can be used with any of
> the pgaudit approaches being discussed.  The fact that it's
> postgresql.conf, which represents GUCs, doesn't change anything
> regarding what you’re getting at.
> 

It changes everything: pg superusers have complete control of files in the pg 
data directory.  If set up correctly, pg superusers have no control in /etc.

>> If they can copy or install a PG binary, then the OS auditing and security 
>> failed. PG code need not be concerned.
>> 
>> Sure someone could still break access to the URL, but again, not PG’s 
>> concern.  Other security and auditing would have responsibility to pick that 
>> up.
>> 
>> It is a really simple use case, record everything any db superuser does and 
>> send it to the audit system.  Done.  Then a designated role can control what 
>> gets audited in production.  As long as everything the db superuser does can 
>> be written to an audit log, then it no longer matters technically if the db 
>> superuser can change the rest of the auditing.  If they do and it violates 
>> policy, then when the audit picks it up, they lose their job plus depending 
>> on the environment. 
> 
> The issue is really around what we claim to provide with this auditing.
> We can't claim to provide *full* superuser auditing with any of these
> approaches since the superuser can disable auditing.  We can, however,
> provide auditing up until that happens, which is likely to be sufficient
> in many environments.  For those environments where full superuser
> auditing is required, an independent system must be used.
> 
> Of course, it's important that any auditing mechanism which is used to
> audit superusers be impossible for the superuser to modify after the
> fact, meaning that syslog or similar needs to be used.
> 

I’m still confused, since you do not differentiate between db superuser and 
os superuser; and what do you mean by an independent system?

With the scheme I described above, how does the db superuser disable auditing 
without os root privileges?  If they can, then pg security is fundamentally 
broken, and I don’t believe it is.
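
To make the scheme concrete, a sketch of the setup an OS administrator would do 
(the file name and behaviour are the proposal quoted above, not anything 
PostgreSQL implements today; the URL is a placeholder):

# Root-owned audit target file: root can write it, everyone else can only read it.
echo 'https://audit.example.com/pg-superuser' > /etc/pg_su_audit
chown root:root /etc/pg_su_audit
chmod 644 /etc/pg_su_audit
# Under the proposal, postgres sends all db superuser activity to that URL
# and refuses to start if the file exists with different permissions.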

How can an independent system monitor what commands are issued inside the 
database?

I understand my comments do not cover what is being proposed or worked on, and 
that is fine.  But saying it does not have value because the superuser could 
disable any system in pg is wrong, IMO.  Being able to reliably log db 
superusers, without their being able to change the logging, would be a fundamentally 
good security tool as companies become more serious about internal security.  
This is, and will become more, important since a lot of people consider insider 
breaches the biggest security challenge today.

Neil








Re: [HACKERS] pgaudit - an auditing extension for PostgreSQL

2015-02-17 Thread Neil Tiffin

> On Feb 17, 2015, at 3:40 AM, Yeb Havinga  wrote:
> 
> Hi list,
> 
. . . . . 

> Auditing superuser access means auditing beyond the running database.
> The superuser can dump a table, and pipe this data everywhere outside of
> the auditing domain. I cannot begin to imagine the kind of security
> restrictions you'd need to audit what happens with data after libpq has
> produced the results. My best guess would be to incorporate some kind of
> separation of duty mechanism; only allow certain superuser operations
> with two people involved.

My views are from working with FDA-validated environments, and I don’t really 
understand the above.  It is not db auditing’s job to stop or control access 
to data, or to log what happens to data outside of PostgreSQL.  Auditing a db 
superuser is very simple: keep a log of everything a superuser does and write 
that log to an append-only, no-read filesystem or location.  Since the 
db superuser can do anything, there is no value in configuring what to log.  
This should be an option that, once set, cannot be changed without reinstalling 
the PostgreSQL binary.  The responsibility for auditing/controlling any binary 
replacement is the operating system’s, not PostgreSQL’s.  In this environment the 
db superuser will not have authorized root access to the os.

The use cases that I am familiar with are ones where procedural policies 
control what the db superuser can do.  If the policy says that the db superuser 
cannot dump a table and pipe the data someplace without being supervised by a 
third-party auditor (building on the above), then what you want in the log is 
who dumped the data, with a date and time.  That’s it.  It’s up to 
policies, audit review, management, and third-party audit tools to pick up the 
violation.  Auditing’s job is to keep a complete record, not to prevent.  
Prevention is the role of security.

Neil





Re: [HACKERS] [pgsql-packagers] Palle Girgensohn's ICU patch

2014-11-26 Thread Neil Tiffin

On Nov 26, 2014, at 8:21 AM, Greg Stark  wrote:

> I find it hard to believe the original premise of this thread. We knew
> there were some problems with OSX and FreeBSD but surely they can't be
> completely broken?

Ever tried to use Spotlight for searching (English) on the Mac?  Not completely 
broken, just not reliable.  This does not surprise me in the least for OS X.  
The Mac has, in recent history, become a “looks good, but the details may or 
may not be really correct” platform.

I thought FreeBSD was a preferred OS for PostgreSQL?  This does surprise me.

> What happens if you run "ls" with your locale set
> to something like fr_FR.UTF8 ? Does Apple not sell Macs in countries
> other than the US?

Neil
Daily Mac user for a long time.



Re: [HACKERS] PL/pgSQL 2

2014-09-02 Thread Neil Tiffin

On Sep 1, 2014, at 10:24 PM, Craig Ringer  wrote:

> On 09/02/2014 08:09 AM, Neil Tiffin wrote:
>> Now I could use other languages as was suggested upstream.  Lets see, I use 
>> R all the time, but R is not a first class language, not in core, and its 
>> slow. Python 3 would be acceptable to me, but its untrusted. tcl I don’t 
>> know and don’t want to learn as no one else seems to use it (in my world 
>> anyway).  perl is the only possibility left and again, no one in my world is 
>> using Perl and it’s not clear if there is a performance penalty.  The docs 
>> say the best language for performance is PL/pgSQL after pure SQL.
> 
> PL/Perl is plenty fast, FWIW.
> 

Good to know.  I used to do a lot of perl and will revisit the language.

> I agree that it is unfortunate that we don't have an in-core trusted
> "real language" PL other than PL/Perl. I am personally hoping that PL/V8
> will be in a position to be adopted as "PL/JavaScript" soon, as that
> would be an excellent fit with how the language fashion world is
> currently moving - JSON and JavaScript abound.
> 
> More seriously, JavaScript is also a good fit for a trusted PL. I've
> long favoured Lua because of the excellent embeddable runtime and
> security-friendly design, but it's never really got the uptake required
> to make it a serious contender.
> 
> I'd be quite happy to see PL/JavaScript in-core.
> 
> (The other obvious candidate would be PL/Ruby, but it doesn't have an
> untrusted variant, and AFAIK Ruby is no better than Python when it comes
> to supporting a secure runtime: hopeless.)
> 
>> That should be enough alone to suggest postgreSQL start working on a modern, 
>> in core, fast, fully supported language.
> 
> I couldn't disagree more.
> 
> If we were to implement anything, it'd be PL/PSM
> (http://en.wikipedia.org/wiki/SQL/PSM). I'm sure it's as bizarre and
> quirky as anything else the SQL committee has brought forth, but it's at
> least a standard(ish) language.

I’d be happy with PL/JavaScript, PL/Lua, or ?? as long as creating dynamic SQL 
queries was simple, i.e. no goofball 6- or 10-level quoting to make it work.  So 
instead of (from the docs, 40.6.4. Looping Through Query Results):

EXECUTE 'TRUNCATE TABLE ' || quote_ident(mviews.mv_name);
EXECUTE 'INSERT INTO '
   || quote_ident(mviews.mv_name) || ' '
   || mviews.mv_query;

should be something like:
EXECUTE 'TRUNCATE TABLE $$mviews.mv_name';
EXECUTE 'INSERT INTO $$mviews.mv_name $$mviews.mv_query';

Wow, so after I wrote the above, I went back to review the docs and lo and 
behold the format function was added in 9.1 (I think).

It turns out it can already be written as (not tested)
EXECUTE format('TRUNCATE TABLE %I', mviews.mv_name);
EXECUTE format('INSERT INTO %I %s', mviews.mv_name, mviews.mv_query);

That’s not so bad and very similar to how it would have to be done in many 
other languages.  However, the first three examples in the PL/pgSQL docs for 
dynamic queries, and many, many other places, don’t show this approach.  And the 
format syntax is only listed 4 lines from the bottom of the section as a ‘you 
can also do this’.  From the position and wording I would interpret that 
something must be wrong with using the format function to construct dynamic 
queries, but, who knew, I never scrolled down that far in the docs.  Thank you 
to whoever added the format() function.

So what’s wrong with using format() for dynamic queries, and why is the approach 
not more prominent or recommended?  And the format function option is not even 
listed in the section on quoting (40.11.1. Handling of Quotation Marks).
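
For reference, a minimal, untested sketch of the docs' looping example (40.6.4) 
written entirely with format(); the cs_materialized_views table and the 
mv_name/mv_query columns are borrowed from that example, and %s (not %L) is used 
for the query body because it has to be spliced in as raw SQL, not as a quoted 
literal:

DO $$
DECLARE
    mviews RECORD;
BEGIN
    FOR mviews IN SELECT mv_name, mv_query FROM cs_materialized_views LOOP
        -- Rebuild each materialized view: empty it, then refill it from its defining query.
        EXECUTE format('TRUNCATE TABLE %I', mviews.mv_name);
        EXECUTE format('INSERT INTO %I %s', mviews.mv_name, mviews.mv_query);
    END LOOP;
END;
$$;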

Neil





Re: [HACKERS] PL/pgSQL 2

2014-09-01 Thread Neil Tiffin

On Sep 1, 2014, at 10:45 AM, Tom Lane  wrote:

> What is actually being proposed, AFAICS, is a one-shot fix for a bunch
> of unfortunate choices.  That might be worth doing, but let's not fool
> ourselves about whether it’s one-shot or not.

Well, one shot every 18 years is not so bad.

I am only a casual user and as such probably do not merit much consideration 
from the experts here.  I only work with PL/pgSQL occasionally, but never go 
weeks without doing it and sometimes it is all I do for weeks.  

That said, and this being the internet, IMO working in PL/pgSQL is only slightly 
better than stabbing myself in the leg with a knife compared to other 
non-PL/pgSQL languages I work in.  Mostly my hate is directed at the silly 
quoting.  But it has lots of other quirks that are not all that obvious unless 
you work with it all day, every day.

Now I could use other languages, as was suggested upstream.  Let’s see: I use R 
all the time, but R is not a first-class language, not in core, and it’s slow. 
Python 3 would be acceptable to me, but it’s untrusted. Tcl I don’t know and 
don’t want to learn, as no one else seems to use it (in my world anyway).  Perl 
is the only possibility left and again, no one in my world is using Perl and 
it’s not clear if there is a performance penalty.  The docs say the best 
language for performance is PL/pgSQL, after pure SQL.

Really, this is from the docs:

a_output := a_output || '' if v_'' ||
referrer_keys.kind || '' like ''
|| referrer_keys.key_string || ''
then return ''  || referrer_keys.referrer_type
|| ''; end if;'';

That should be enough alone to suggest PostgreSQL start working on a modern, in-core, 
fast, fully supported language.  Of course PL/pgSQL works, but so did 
one-line 5k Perl programs that nobody likes today.  Everything can be done in 
assembler, but no one suggests that today.  Today, it is all about programmer 
productivity.  PL/pgSQL has a lot of unnecessary stuff that sucks the life out 
of programmer productivity.  And this should be very much a concern of the 
professionals that support PostgreSQL.

For example:

DECLARE
declarations 
BEGIN
statements
END

This looks a lot like COBOL or Pascal, and today is mostly unnecessary.  

So my opinion would be to keep PL/pgSQL and adopt a new PL language that is 
fully supported, fast, and modern, that will over time replace PL/pgSQL.

Neil








Re: [HACKERS] pgaudit - an auditing extension for PostgreSQL

2014-05-06 Thread Neil Tiffin

On May 4, 2014, at 5:27 PM, Stephen Frost  wrote:

> * Neil Tiffin (ne...@neiltiffin.com) wrote:
>> On May 4, 2014, at 3:17 PM, Stephen Frost  wrote:
>>> Any system where there exists a role similar to 'superuser' in the PG
>>> sense (a user who is equivalent to the Unix UID under which the rest of
>>> the system is run) would be hard-pressed to provide a solution to this
>>> issue.
>> 
>> Not sure I understand which issue you are referring to.  If you are 
>> referring to 'cannot be turned off', I would think a reasonable first pass 
>> would be to handle it similar to '--data-checksums' in 'initdb'.  For 
>> example, "This option can only be set during initialization, and cannot be 
>> changed later. If set, basic auditing is on for all objects, in all 
>> databases."
> 
> Well, except that a superuser *could* effectively turn off checksums by
> changing the control file and doing a restart (perhaps modulo some
> other hacking; I've not tried).  That kind of trivial 'hole' isn't
> acceptable from a security standpoint though and given that we couldn't
> prevent a superuser from doing an LD_PRELOAD and overriding any system
> call we make from the backend, it's kind of hard to see how we could
> plug such a hole.
> 

Ah, I thought it would be more difficult than that for checksums, but 
PostgreSQL does not have to prevent hacking; in my experience, that is the 
responsibility of other systems and procedures.  If the core code were such that, 
once on, formal logging could not be turned off by any change to config 
files, settings, or SQL, then in my experience that would suffice.

>>> With SELinux it may be possible and I'd love to see an example
>>> from someone who feels they've accomplished it.  That said, if we can
>>> reduce the need for a 'superuser' role sufficiently by having the
>>> auditing able to be managed independently, then we may have reached the
>>> level of "considerable headache".
>>> 
>>> As many have pointed out previously, there is a certain amount of risk
>>> associated with running without *any* superuser role in the system
>> 
>> If all of the superuser's actions are logged and it's not possible to turn 
>> off the logging (without considerable headache) then it may not matter what 
>> the superuser can do.  If the superuser makes changes and they are logged 
>> then the auditors have sufficient information to see if the correct 
>> procedures were followed.  Validated systems are based on tracking, not 
>> necessarily prohibiting. Select individuals that should be able to be 
>> trusted (which should apply to superusers) should be able to perform the 
>> actions necessary to support the organization.
> 
> Fair enough- the question is just a matter of what exactly that level of
> "headache" is.





Re: [HACKERS] pgaudit - an auditing extension for PostgreSQL

2014-05-04 Thread Neil Tiffin

On May 4, 2014, at 3:17 PM, Stephen Frost  wrote:

> Neil,
> 
> Thanks for sharing- sounds very similar to what I've heard also.  Your
> input and experience with this is very much sought and appreciated-
> please continue to help us understand, so we're able to provide
> something concrete and useful.  Further comments inline.
> 
> * Neil Tiffin (ne...@neiltiffin.com) wrote:
>> In considering how this might apply to PostgreSQL, it seems that once formal 
>> auditing is turned on, basic non-changeable level of audit reporting should 
>> be in place (i.e. log all create/drop/rename tables/columns/roles and log 
>> all superuser/audit role actions) and this basic audit reporting cannot be 
>> turned off or have the destination changed without considerable headache 
>> (something like init'ing the database?).  Then data monitoring auditing 
>> rules can be added/changed/removed as necessary within the authorization 
>> framework.  Formal auditing might also require other functionality like 
>> checksums.
> 
> Any system where there exists a role similar to 'superuser' in the PG
> sense (a user who is equivalent to the Unix UID under which the rest of
> the system is run) would be hard-pressed to provide a solution to this
> issue.

Not sure I understand which issue you are referring to.  If you are referring 
to 'cannot be turned off', I would think a reasonable first pass would be to 
handle it similar to '--data-checksums' in 'initdb'.  For example, "This option 
can only be set during initialization, and cannot be changed later. If set, 
basic auditing is on for all objects, in all databases."
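
On the command line that idea would look something like this (only 
--data-checksums is a real initdb option; the audit flag is purely hypothetical, 
named here just to illustrate a set-once-at-initdb setting):

# --data-checksums already works this way: it can only be given at initdb time.
# A hypothetical --basic-audit flag (not a real option) could follow the same model.
initdb --data-checksums --basic-audit -D /path/to/data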

>  With SELinux it may be possible and I'd love to see an example
> from someone who feels they've accomplished it.  That said, if we can
> reduce the need for a 'superuser' role sufficiently by having the
> auditing able to be managed independently, then we may have reached the
> level of "considerable headache".
> 
> As many have pointed out previously, there is a certain amount of risk
> associated with running without *any* superuser role in the system

If all of the superuser's actions are logged and it's not possible to turn off 
the logging (without considerable headache), then it may not matter what the 
superuser can do.  If the superuser makes changes and they are logged, then the 
auditors have sufficient information to see whether the correct procedures were 
followed.  Validated systems are based on tracking, not necessarily 
prohibiting.  Select individuals who can be trusted (which should 
apply to superusers) should be able to perform the actions necessary to support 
the organization.

> (though it's certainly possible to do so), as it becomes much more
> difficult to do certain kinds of analysis and forensics associated with
> trying to recover a corrupt system.  Still, that risk may very well be
> acceptable in some environments.  I'd certainly like to see us get to a
> point where a superuser role isn't absolutely required once the system
> is up and running.
> 
>> Until these or similar requirements (for formal auditing) are in core, it 
>> makes no sense (to me) to not allow the superuser to manage auditing because 
>> any conformance requirements have to be procedure based, not system based.  
>> People often forget that procedure/people based audit conformance worked 
>> just fine before computers existed.
> 
> I do understand this and I expect we will always allow the roles which
> are 'superuser' to modify these procedures, but we'll get to a point
> where such a role doesn't have to exist (or it's a considerable headache
> to get one into place) and that'll get us to the point which is required
> to check the "formal auditing" box for the organizations which are
> interested and willing to accept those trade-offs.
> 







Re: [HACKERS] pgaudit - an auditing extension for PostgreSQL

2014-05-04 Thread Neil Tiffin

On May 4, 2014, at 10:12 AM, Tom Lane  wrote:
> Stephen Frost  writes:
>> * Abhijit Menon-Sen (a...@2ndquadrant.com) wrote:
>>> 1. I wish it were possible to prevent even the superuser from disabling
>>> audit logging once it's enabled, so that if someone gained superuser
>>> access without authorisation, their actions would still be logged.
>>> But I don't think there's any way to do this.
> 
>> Their actions should be logged up until they disable auditing and
>> hopefully those logs would be sent somewhere that they're unable to
>> destroy (eg: syslog).  Of course, we make that difficult by not
>> supporting log targets based on criteria (logging EVERYTHING to syslog
>> would suck).
> 
>> I don't see a way to fix this, except to minimize the amount of things
>> requiring superuser to reduce the chances of it being compromised, which
>> is something I've been hoping to see happen for a long time.
> 
> Prohibiting actions to the superuser is a fundamentally flawed concept.
> If you do that, you just end up having to invent a new "more super"
> kind of superuser who *can* do whatever it is that needs to be done.


In getting approval for FDA-validated systems, IIRC, they wanted the permission 
to change auditing kept completely independent of the technical roles and 
responsibilities.  Meaning that superuser or owner roles could not change the 
audit requirements once established, and the audit role could not change any 
data or data definitions except to add, change, or remove auditing rules. 
Everything the auditor role did was logged, no exceptions.

If an owner or superuser dropped a table, the auditors were completely fine with 
a log entry that the table/column was dropped or created by someone.  The audit 
reporting system (external to the database) had notifications for these types 
of events.  For example, by procedure these changes should have been done in 
conjunction with the auditors, and the initial audit requirements should already 
have been improved by the auditors when the column/table was added back.  
Dropping a table/column without getting approval ahead of time was a procedure 
violation that could result in termination.  Of course, there were a lot more 
details.

By monitoring create/drop DDL events, along with audit rules that could not be 
changed by technical staff, the auditors were happy that they could manage 
audit conformance.

And yes, the audit logs had to be written in a way that they could not be easily 
tampered with.  At the time we used an approved append-only, read-only hardware 
file/reporting system.

In considering how this might apply to PostgreSQL, it seems that once formal 
auditing is turned on, basic non-changeable level of audit reporting should be 
in place (i.e. log all create/drop/rename tables/columns/roles and log all 
superuser/audit role actions) and this basic audit reporting cannot be turned 
off or have the destination changed without considerable headache (something 
like init'ing the database?).  Then data monitoring auditing rules can be 
added/changed/removed as necessary within the authorization framework.  Formal 
auditing might also require other functionality like checksums.

Until these or similar requirements (for formal auditing) are in core, it makes 
no sense (to me) to not allow the superuser to manage auditing because any 
conformance requirements have to be procedure based, not system based.  People 
often forget that procedure/people based audit conformance worked just fine 
before computers existed.

Neil





Re: [HACKERS] Deprecating RULES

2012-10-17 Thread Neil Tiffin

On Oct 17, 2012, at 4:45 PM, Daniel Farina wrote:

> On Wed, Oct 17, 2012 at 1:12 PM, Josh Berkus  wrote:
>> On 10/17/12 12:57 PM, Daniel Farina wrote:
>>> I'll have to register my disagreement then, in the special case where
>>> a feature becomes so obscure that many people don't have a wide-spread
>>> intuition at what it's good at or used for.  Tom also said "build the
>>> replacement," and without itemization of use cases, I don't even know
>>> what that would look like -- perhaps such knowledge is assumed, but I
>>> think it's assumed wrongly, so perhaps there just needs to be some
>>> education.  At best you could define what to build somewhat
>>> tautologically from the mechanism used by RULES, and that's not a very
>>> good way to go about it, methinks.
> 
> [use case, redacted, although worth independent consideration]
> 
>> Putting it as "Andrew and Josh need to enumerate these cases, or forever
>> be silent" is quite unfair to our users.  Andrew and I hardly represent
>> the entire scope of PostgreSQL app developers.  Enumerating the cases,
>> finding replacements for them, and documenting migrations needs to be a
>> group effort.
> 
> Unfortunately I myself see little evidence of the vast, vast --
> several nines of vast -- majority of folks using rules, and as I said:
> as a thought experiment, merely one solved bug is worth more to me
> than rules from what I know at this time.  If I had a wealth of user
> pain to draw upon, I would have in opposition to their deprecation.
> But, I don't, so I am cautiously in favor of pipelining a slow
> deprecation, even though I can only be hurt by the process tactically

I am a lurker here and, as such, understand that I have no standing.  But I do 
write internal applications using PostgreSQL, and it seems to me that the 
direction forward is clear.  I just went back and read the 9.2 documentation 
on Rules.  It appears that Rules are a currently supported and best solution to 
many problems.  So, as previously stated and I think pretty much agreed, the docs 
must be changed.  I did not pick up from the docs that there were the problems 
mentioned in the various emails.

With that said, having read each email, there are some politics that do not 
make sense.

Are these the facts?

1. Rules are required in the core.  For example, that is how views are 
implemented.
2. There are some, possibly fringe, use cases where Rules are the best solution.
3. There are many uses of Rules that are fragile, or even broken in 
implementation.

4. There is a desire to make Rules an internal core functionality only.
or
5. There is a desire to eliminate Rules all together.

6. There is new functionality that does not work correctly considering Rules.  
(e.g. Rules code is not updated.)

It would seem to me that with #1 and #2 it is foolish (to me, not understanding 
the politics) to consider deprecation.

The real issue is, "Should Rules be visible to users?"

As an application developer, I do not use Rules because they are non-standard 
and my code will be used by different back ends, so personally I have no skin 
in this decision.  But logically, I think that it is silly to consider 
deprecation at this time.  The time to consider deprecation is when no core 
functionality depends on Rules.  Until that time, there is nothing to be gained 
by deprecation, and there is no reason to piss off users by deprecating code 
that has to be maintained anyway.

So I would move the docs to the internals section, state that Rules are not 
recommended for use in user SQL and that Rules may be deprecated in the 
future, then leave things alone for a couple of years until the way forward 
becomes clear.  If developers want to deprecate Rules, then create code that 
eliminates Rules from being required for core functions.

It seems to me that eventually Rules will suffer bit rot and it will be clear 
that it is time to remove all traces, or Rules will be maintained (albeit 
possibly with less scope) and will continue as core functionality based on need.

Neil









Re: [HACKERS] Darwin 1.4 (OS X 10.1) Broken Compile, Snapshot and

2001-10-05 Thread Neil Tiffin

At 9:41 AM -0400 10/5/01, Alex Avriette wrote:
>I had a catastrophic crash on one of our webservers here and took the
>opportunity to upgrade it. Unfortunately, after the upgrade, I am unable to
>compile either 7.1.3 or the current snapshot of postgres.
>
>The error I get is rather opaque:
>
>/usr/bin/ld: -undefined error must be used when -twolevel_namespace is in
>effect
>
>This is in the final stretch of compiling, on libpq. I checked google for
>it, and came up with three header files in an MIT Apple sourcetree. So this
>strikes me as a particularly Darwinish failing. I also ran a recursive grep
>on the postgres source tree and wasn't able to find -twolevel_namespace in
>any of the Makefiles. This makes me think it is something external.
>
>Anyone have an idea as to what is causing this? This box is down until
>postgres comes back up. :-7
>
>alex
>

Alex,

You might check the Fink mailing list; I believe this has been talked 
about, but I can't remember which mailing list.  But I believe Fink 
installs postgresql just fine on 10.1.

http://sourceforge.net/projects/fink

I might have also seen the discussion on the Darwin mailing list.

http://lists.apple.com/archives/darwin-development/2001/Sep/19.html

has something like:
simply add -flat_namespace to the LDFLAGS.

And later on the same page are the patches to make it work.
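
If it helps, the usual way to feed that flag into the build would be something 
like this (untested by me; the flag itself comes from the Darwin list post above):

# Pass the linker flag through configure, then rebuild.
LDFLAGS="-flat_namespace" ./configure
make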

Neil Tiffin
Chicago





Re: Proposal: [HACKERS] OID wraparound: summary and proposal

2001-08-07 Thread Neil Tiffin

At 11:22 AM -0400 8/7/01, Tom Lane wrote:
>Neil Tiffin <[EMAIL PROTECTED]> writes:
>>  I have not even considered multiple database servers running
>>  different database, which is our design goal.  In this case we would
>>  like to have a slimmed down (and blazingly fast) PostgreSQL server in
>>  which we manage the uid in our middleware.  This is because the uid
>>  must be unique accross all servers and database vendors.
>
>Given those requirements, it seems like your UID *must* be an
>application-defined column; there's no way you'll get a bunch of
>different database vendors to all sign on to your approach to UIDs.
>
>So in reality, I think the feature you want is precisely to be able
>to suppress Postgres' automatic OID generation on your table(s), since
>it's of no value to you.  The number of cycles saved per insert isn't
>going to be all that large, but they'll add up...

That sounds about right.  It's amazing how having to write this stuff 
down clarifies one's thoughts.
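
For the archives, a minimal sketch of that application-defined column on the 
PostgreSQL side (table and column names are made up, and char(36) assumes the 
standard hyphenated text form of a UUID; the value itself is generated in the 
middleware):

CREATE TABLE gnue_object (
    objectid  char(36) PRIMARY KEY,  -- middleware-generated UUID, stored as text
    payload   text
);

INSERT INTO gnue_object (objectid, payload)
VALUES ('6ba7b810-9dad-11d1-80b4-00c04fd430c8', 'example row');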

-- 
Neil
[EMAIL PROTECTED]
GNU Enterprise
http://www.gnuenterprise.org/
http://www.gnuenterprise.org/~neilt/sc.html




Re: Proposal: [HACKERS] OID wraparound: summary and proposal

2001-08-07 Thread Neil Tiffin

At 10:09 AM +0500 8/7/01, Hannu Krosing wrote:
>Neil Tiffin wrote:
>>
>>  I would just like to comment that for our project, GNU Enterprise, we
>>  use our own 128 bit object ID that is unique (UUID) for every row in
>>  all tables.
>>
>>  It seems to me, without having looked into it, that having both a
>>  PostgreSQL UID  and our own 128 bit objectid (UUID) is redundant and
>>  slows the whole process down.  But we are storing object data in the
>>  database and require and absolutely unique objectid.  We are planning
>>  for enterprise usage and expect to need 128 bits to uniquely define
>>  our objects.
>
>Is it just an 128-bit int from a sequence or does it have some internal
>structure ?
>
>What kind of enterprise do you expect to have more than
>18 446 744 073 709 551 615 of objects that can uniquely be identified
>by 64 bits ?

Our objectid is a UUID from libuuid (provided by e2fsprogs; it requires the 
development files, and the Debian package uuid-dev provides everything 
necessary).  We use the text representation, which IIRC is 33 characters 
(38 minus the "-"), to store it in the database.  (And I don't think 
this is the best way to do it.)  As for 64 bits being enough, you may 
just be right.  The developer who did this part of the code has left 
(and we are taking the opportunity to examine this).

We will eventually compete with SAP, PeopleSoft, etc.  Consider 
that SAP has about 20,000 tables to represent an enterprise, add the 
life of the system at 10 years, and you start to knock down the number 
very fast.

I think in the short term we could be happy with a 64-bit id, as we 
don't even have our first application working (but we are within a 
couple of months) and it will be years before we have a system that 
will perform in large-scale environments.

In either case the perfect solution, for us, would be to be able to 
configure the PostgreSQL uid as none, 64-bit, or 128-bit at 
compile time.  A default of 64 bits would be just fine.  But we need 
the uid to be unique for the database or we will still have to 
create and use our own uid (and that will slow everything down).

I have not even considered multiple database servers running 
different databases, which is our design goal.  In this case we would 
like to have a slimmed-down (and blazingly fast) PostgreSQL server in 
which we manage the uid in our middleware.  This is because the uid 
must be unique across all servers and database vendors.  (I don't 
claim to be a database guru, so if we are all wet here please feel 
free to help correct our misunderstandings.)

-- 
Neil
[EMAIL PROTECTED]
GNU Enterprise
http://www.gnuenterprise.org/
http://www.gnuenterprise.org/~neilt/sc.html




Re: Proposal: [HACKERS] OID wraparound: summary and proposal

2001-08-03 Thread Neil Tiffin

I would just like to comment that for our project, GNU Enterprise, we 
use our own 128 bit object ID that is unique (UUID) for every row in 
all tables.

It seems to me, without having looked into it, that having both a 
PostgreSQL UID and our own 128-bit objectid (UUID) is redundant and 
slows the whole process down.  But we are storing object data in the 
database and require an absolutely unique objectid.  We are planning 
for enterprise usage and expect to need 128 bits to uniquely define 
our objects.

So I would strongly request that we have an option for a 128-bit 
unique id for all rows in the database, and/or that it is configurable 
so we can best decide how to use it.  We would like to use our own 
and have the PostgreSQL uid fast and small, or have it larger and 
slower but remove the need to generate our own uid.

Neil
[EMAIL PROTECTED]
GNU Enterprise
http://www.gnuenterprise.org/
http://www.gnuenterprise.org/~neilt/sc.html


At 10:17 AM +0200 8/3/01, Zeugswetter Andreas SB wrote:
>  > > At the same time that we announce support for optional OIDs,
>>  > we should announce that, in future releases, OIDs will only be
>>  > guaranteed unique (modulo wraparounds) within a single table.
>
>... if an appropriate unique constraint is explicitly created.
>
>>
>>  Seems reasonable --- that will give people notice that we're thinking
>>  about separate-OID-generator-per-table ideas.
>
>Imho we should think about adding other parts to the external representation
>of OID before we start thinking about moving from 4 to 8 bytes in the heap.
>Essentially the oid would then be a concatenated e.g. 16 byte number,
>that is constructed with:
>
>   oid128 = installation oid<<96 + class oid<<64 + 
>for_future_use<<32 + tuple oid
>
>Imho walking that direction would serve the "OID" idea a lot better,
>and could actually guarantee a globally unique oid, if the "installation
>oid" was centrally managed.
>
>It has the additional advantage of knowing the class by only looking 
>at the oid.
>
>The btree code could be specially tuned to only consider the lower 
>4(or 8) bytes
>on insert and make an early exit for select where oid = wrong class id.





[HACKERS] Regression failed Mac OSX

2001-04-04 Thread Neil Tiffin

Using current cvs version on Mac OS X 10.0 built with

./configure
make
make check

test horology ... FAILED

from the regression.diffs file

Is this a problem or not?

Neil Tiffin
Chicago


[localhost:src/test/regress] ntiffin% cat regression.diffs
*** ./expected/horology.out Sun Dec  3 08:51:11 2000
--- ./results/horology.out  Wed Apr  4 13:31:02 2001
***
*** 122,128 
   SELECT time with time zone '01:30' + interval '02:01' AS "03:31:00-08";
03:31:00-08
   -
!  03:31:00-08
   (1 row)

   SELECT time with time zone '01:30-08' - interval '02:01' AS "23:29:00-08";
--- 122,128 
   SELECT time with time zone '01:30' + interval '02:01' AS "03:31:00-08";
03:31:00-08
   -
!  03:31:00-07
   (1 row)

   SELECT time with time zone '01:30-08' - interval '02:01' AS "23:29:00-08";
***
*** 140,146 
   SELECT time with time zone '03:30' + interval '1 month 04:01' AS 
"07:31:00-08";
07:31:00-08
   -
!  07:31:00-08
   (1 row)

   SELECT interval '04:30' - time with time zone '01:02' AS "+03:28";
--- 140,146 
   SELECT time with time zone '03:30' + interval '1 month 04:01' AS 
"07:31:00-08";
07:31:00-08
   -
!  07:31:00-07
   (1 row)

   SELECT interval '04:30' - time with time zone '01:02' AS "+03:28";

==





[HACKERS] Problem Building on Mac OSX

2001-04-04 Thread Neil Tiffin

I am trying to build on Mac OS X 10.0 from the current CVS.

./configure --with-perl --with-openssl --enable-syslog |& tee configure.logfile



make |& tee Makefile.logfile



Running Mkbootstrap for plperl ()
chmod 644 plperl.bs
LD_RUN_PATH="" cc -o blib/arch/auto/plperl/plperl.bundle  -bundle 
-undefined suppress plperl.o eloglvl.o SPI.o 
/System/Library/Perl/darwin/auto/Opcode/Opcode.bundle 
-L/System/Library/Perl/darwin/CORE -lperl
/usr/bin/ld: /System/Library/Perl/darwin/auto/Opcode/Opcode.bundle is 
input for the dynamic link editor, is not relocatable by the static 
link editor again
make[4]: *** [blib/arch/auto/plperl/plperl.bundle] Error 1
make[3]: *** [all] Error 2
make[2]: *** [all] Error 2
make[1]: *** [all] Error 2
make: *** [all] Error 2

Any ideas?

Neil
