Re: Fix a typo in pg_rotate_logfile

2024-02-14 Thread Dave Page
Hi

On Mon, 12 Feb 2024 at 21:31, Bharath Rupireddy <
bharath.rupireddyforpostg...@gmail.com> wrote:

> On Tue, Feb 13, 2024 at 2:29 AM Daniel Gustafsson  wrote:
> >
> > On that note though, we might want to consider just dropping it
> altogether in
> > v17 (while fixing the incorrect hint in backbranches)?  I can't imagine
> > adminpack 1.0 being in heavy use today, and skimming pgAdmin code it
> seems it's
> > only used in pgAdmin3 and not 4. Maybe it's time to simply drop old code?
>
> https://codesearch.debian.net/search?q=pg_logfile_rotate=1
> shows no users for it though. There's pgadmin3 using it
>
> https://github.com/search?q=repo%3Apgadmin-org%2Fpgadmin3%20pg_logfile_rotate=code
> ,
> however the repo is archived. Surprisingly, core has to maintain the
> old code needed for adminpack 1.0 - pg_rotate_logfile_old SQL function
> and pg_rotate_logfile function in signalfuncs.c. These things could
> have been moved to adminpack.c back then and pointed CREATE FUNCTION
> pg_catalog.pg_logfile_rotate() to use it from adminpack.c. If we
> decide to remove adminpack 1.0 version completely, the 1.0 functions
> pg_file_read, pg_file_length and pg_logfile_rotate will also go away
> making adminpack code simpler.
>
> Having said that, it's good to hear from others, preferably from
> pgadmin developers - added Dave Page (dp...@pgadmin.org) in here for
> inputs.
>

As it happens we're currently implementing a redesigned version of that
functionality from pgAdmin III in pgAdmin 4. However, we are not using
adminpack for it.

FWIW, the reason for the weird naming is that originally all the
functionality for reading/managing files was added entirely as the
adminpack extension. It was only later that some of the functionality was
moved into core and renamed along the way (everyone likes blue for their
bikeshed, right?). The old functions (albeit rewritten to use the new core
functions) were kept in adminpack for backwards compatibility.
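
For anyone trying to map the two sets of names, a rough sketch of the
difference (assuming a server that still has the adminpack 1.0 extension
installed):

-- adminpack 1.0 name, kept for backwards compatibility as described above:
SELECT pg_catalog.pg_logfile_rotate();

-- the renamed equivalent that was moved into core:
SELECT pg_rotate_logfile();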

That said, pgAdmin III has been out of support for many years, and as far
as I know, it and the similarly old versions of EDB's PEM that were based on
it were the only consumers of adminpack. I would not be sad to see it
removed entirely - except for the fact that I fondly remember being invited
to join -core immediately after a heated discussion with Tom about it!

-- 
Dave Page
pgAdmin: https://www.pgadmin.org
PostgreSQL: https://www.postgresql.org
EDB: https://www.enterprisedb.com


Re: When to drop src/tools/msvc support

2023-04-11 Thread Dave Page
On Tue, 11 Apr 2023 at 13:52, Jonathan S. Katz  wrote:

> On 4/11/23 7:54 AM, Dave Page wrote:
> >
> >
> > On Tue, 11 Apr 2023 at 11:58, Andrew Dunstan  wrote:
> >
> > For meson you just need to "pip install meson ninja" in your
> > python distro and you should be good to go (they will be installed
> > in python's Scripts directory). Don't use chocolatey to install
> > meson/ninja - I ran into issues doing that.
> >
> > AFAICT meson will use whatever version of VC you have installed,
> > although I have only been testing with VC2019.
> >
> > OK, that sounds easy enough then (famous last words!)
>
> [RMT hat]
>
> Dave -- does this mean you see a way forward on moving the Windows
> builds over to use Meson instead of MSVC?
>

I can see a way forward, yes.


>
> Do you think we'll have enough info by end of this week to make a
> decision on whether we can drop MSVC in v16?
>

There's no way I can test anything this week - I'm on leave for most of it
and AFK.

But my point was more that there are almost certainly more projects than
just the EDB installers using the MSVC build system; pgAdmin is just one
example.

-- 
Dave Page
Blog: https://pgsnake.blogspot.com
Twitter: @pgsnake

EDB: https://www.enterprisedb.com


Re: When to drop src/tools/msvc support

2023-04-11 Thread Dave Page
On Tue, 11 Apr 2023 at 11:58, Andrew Dunstan  wrote:

>
> On 2023-04-11 Tu 04:05, Dave Page wrote:
>
>
>
> On Tue, 11 Apr 2023 at 08:09, Magnus Hagander  wrote:
>
>> On Tue, Apr 11, 2023 at 12:27 AM Andres Freund 
>> wrote:
>> >
>> > Hi,
>> >
>> > On 2023-04-10 19:55:35 +0100, Dave Page wrote:
>> > > Projects other than the EDB installers use the MSVC build system -
>> e.g.
>> > > pgAdmin uses its own builds of libpq and other tools (psql, pg_dump
>> etc)
>> > > that are pretty heavily baked into a fully automated build system
>> (even the
>> > > build servers and all their requirements are baked into Ansible).
>> > >
>> > > Changing that lot would be non-trivial, though certainly possible,
>> and I
>> > > suspect we’re not the only ones doing that sort of thing.
>> >
>> > Do you have a link to the code for that, if it's open? Just to get an
>> > impression for how hard it'd be to switch over?
>>
>>
>> The pgadmin docs/readme refers to
>> https://github.com/pgadmin-org/pgadmin4/tree/master/pkg/win32
>>
>> It clearly doesn't have the full automation stuff, but appears to have
>> the parts about building the postgres dependency.
>>
>
> Yeah, that's essentially the manual process, though I haven't tested it in
> a while. The Ansible stuff is not currently public. I suspect (or rather,
> hope) that we can pull in all the additional packages required using
> Chocolatey which shouldn't be too onerous.
>
> Probably my main concern is that the Meson build can use the same version
> of the VC++ compiler that we use (v14), which is carefully matched for
> compatibility with all the various components, just in case anything passes
> CRT pointers around. Python is the one thing we don't build ourselves on
> Windows and the process will build modules like gssapi and psycopg (which
> links with libpq of course), so we're basically following what they use.
>
>
>
> For meson you just need to "pip install meson ninja" in your python
> distro and you should be good to go (they will be installed in python's
> Scripts directory). Don't use chocolatey to install meson/ninja - I ran
> into issues doing that.
>
> AFAICT meson will use whatever version of VC you have installed, although
> I have only been testing with VC2019.
>
OK, that sounds easy enough then (famous last words!)

Thanks!

-- 
Dave Page
Blog: https://pgsnake.blogspot.com
Twitter: @pgsnake

EDB: https://www.enterprisedb.com


Re: When to drop src/tools/msvc support

2023-04-11 Thread Dave Page
On Tue, 11 Apr 2023 at 08:09, Magnus Hagander  wrote:

> On Tue, Apr 11, 2023 at 12:27 AM Andres Freund  wrote:
> >
> > Hi,
> >
> > On 2023-04-10 19:55:35 +0100, Dave Page wrote:
> > > Projects other than the EDB installers use the MSVC build system - e.g.
> > > pgAdmin uses its own builds of libpq and other tools (psql, pg_dump
> etc)
> > > that are pretty heavily baked into a fully automated build system
> (even the
> > > build servers and all their requirements are baked into Ansible).
> > >
> > > Changing that lot would be non-trivial, though certainly possible, and
> I
> > > suspect we’re not the only ones doing that sort of thing.
> >
> > Do you have a link to the code for that, if it's open? Just to get an
> > impression for how hard it'd be to switch over?
>
>
> The pgadmin docs/readme refers to
> https://github.com/pgadmin-org/pgadmin4/tree/master/pkg/win32
>
> It clearly doesn't have the full automation stuff, but appears to have
> the parts about building the postgres dependency.
>

Yeah, that's essentially the manual process, though I haven't tested it in
a while. The Ansible stuff is not currently public. I suspect (or rather,
hope) that we can pull in all the additional packages required using
Chocolatey which shouldn't be too onerous.

Probably my main concern is ensuring that the Meson build can use the same
version of the VC++ compiler that we use (v14), which is carefully matched
for compatibility with all the various components, just in case anything
passes CRT pointers around. Python is the one thing we don't build ourselves
on Windows, and the process builds modules like gssapi and psycopg (which
links with libpq, of course), so we're basically following what Python uses.

-- 
Dave Page
Blog: https://pgsnake.blogspot.com
Twitter: @pgsnake

EDB: https://www.enterprisedb.com


Re: When to drop src/tools/msvc support

2023-04-10 Thread Dave Page
On Mon, 10 Apr 2023 at 18:34, Robert Haas  wrote:

> On Mon, Apr 10, 2023 at 12:56 PM Tom Lane  wrote:
> > Robert Haas  writes:
> > > However, if this is the direction we're going, we probably need to
> > > give pgsql-packagers a heads up ASAP, because anybody who is still
> > > relying on the MSVC system to build Windows binaries is presumably
> > > going to need some time to adjust. If we rip out the build system
> > > somebody is using a couple of weeks before beta, that might make it
> > > difficult for that person to get the beta out promptly. And I think
> > > there's probably more than just EDB who would be in that situation.
> >
> > Oh ... that's a good point.  Is there anyone besides EDB shipping
> > MSVC-built executables?  Would it even be practical to switch to
> > meson with a month-or-so notice?  Seems kind of tight, and it's
> > not like the packagers volunteered to make this switch.
>
> I can't really speak to those questions with confidence.
>
> Perhaps instead of telling pgsql-packagers what we're doing, we could
> instead ask them if it would work for them if we did XYZ. Then we
> could use that information to inform our decision-making.


Projects other than the EDB installers use the MSVC build system - e.g.
pgAdmin uses its own builds of libpq and other tools (psql, pg_dump etc)
that are pretty heavily baked into a fully automated build system (even the
build servers and all their requirements are baked into Ansible).

Changing that lot would be non-trivial, though certainly possible, and I
suspect we’re not the only ones doing that sort of thing.

-- 
-- 
Dave Page
https://pgsnake.blogspot.com

EDB Postgres
https://www.enterprisedb.com


Re: Remove 'htmlhelp' documentat format (was meson documentation build open issues)

2023-03-28 Thread Dave Page
On Tue, 28 Mar 2023 at 10:46, Peter Eisentraut <
peter.eisentr...@enterprisedb.com> wrote:

> On 24.03.23 17:58, Andres Freund wrote:
> > On 2023-03-24 11:59:23 +0100, Peter Eisentraut wrote:
> >> Another option here is to remove support for htmlhelp.
> >
> > That might actually be the best path - it certainly doesn't look like
> anybody
> > has been actively using it. Or otherwise somebody would have complained
> about
> > there not being any instructions on how to actually compile a .chm file.
> And
> > perhaps complained that it takes next to forever to build.
> >
> > I also have the impression that people don't use the .chm stuff much
> anymore,
> > but that might just be me not using windows.
>
> I think in ancient times, pgadmin used it for its internal help.
>

Yes, very ancient :-). We use Sphinx now.


>
> But I have heard less about htmlhelp over the years than about the info
> format.
>
>
>

-- 
Dave Page
Blog: https://pgsnake.blogspot.com
Twitter: @pgsnake

EDB: https://www.enterprisedb.com


Re: Tracking last scan time

2022-11-08 Thread Dave Page
On Tue, 8 Nov 2022 at 04:10, Michael Paquier  wrote:

> On Mon, Nov 07, 2022 at 04:54:07PM +0900, Michael Paquier wrote:
> > FWIW, all the other areas of pgstatfuncs.c manipulate timestamptz
> > fields with a style like the attached.  That's a nit, still per the
> > role of consistency with the surroundings..
> >
> > Anyway, it seems to me that a regression test is in order before a
> > scan happens just after the relation creation, and the same problem
> > shows up with last_idx_scan.
>
> Hearing nothing, done this way as of d7744d5.  Thanks for the report,
> Robert.  And thanks for the patch, Dave.
>

Thank you!

-- 
Dave Page
Blog: https://pgsnake.blogspot.com
Twitter: @pgsnake

EDB: https://www.enterprisedb.com


Re: Postgres auto vacuum - Disable

2022-11-07 Thread Dave Page
Hi

On Mon, 7 Nov 2022 at 11:42, Karthik Jagadish (kjagadis) 
wrote:

> Hi,
>
>
>
> We have a NMS application in cisco and using postgres as a database.
>
>
>
> We have query related to disabling auto vacuum. We have below
> configuration in postgres.conf where the autovacuum=on is commented out.
>
>
>
> [image: Shape Description automatically generated]
>
>
>
> But when checked in database we notice that it’s showing as on
>
>
>
> [image: Graphical user interface, timeline Description automatically
> generated]
>
>
>
> What would this mean? Does it mean that autovacuum is not disabled?
> Appreciate a response.
>

Right. The default is for it to be enabled, so commenting out the option
does nothing. You would need to set it explicitly to off.

BUT... you almost certainly don't want to do that. Cases where it should be
disabled are *extremely* rare. Make sure you *really* know what you're
letting yourself in for by disabling autovacuum, and don't rely on 10+ year
old performance tuning advice from random places on the internet, if that's
what you're doing.
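
If you want to confirm what the running server is actually using, and where
the value came from, a quick check along these lines should do it:

-- 'source' shows whether the value comes from the default, the
-- configuration file, ALTER SYSTEM, and so on.
SELECT name, setting, source, sourcefile
FROM pg_settings
WHERE name = 'autovacuum';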

-- 
Dave Page
Blog: https://pgsnake.blogspot.com
Twitter: @pgsnake

EDB: https://www.enterprisedb.com


Re: Tracking last scan time

2022-11-03 Thread Dave Page
On Mon, 31 Oct 2022 at 07:36, Dave Page  wrote:

> FYI, this is not intentional, and I do plan to look into it, however I've
> been somewhat busy with pgconfeu, and am travelling for the rest of this
> week as well.
>

Here's a patch to fix this issue. Many thanks to Peter Eisentraut who
figured it out in a few minutes after I spent far too long looking down
rabbit holes in entirely the wrong place.

Thanks for the bug report.


>
> On Sun, 23 Oct 2022 at 21:09, Robert Treat  wrote:
>
>> On Fri, Oct 14, 2022 at 2:55 PM Dave Page  wrote:
>> > On Fri, 14 Oct 2022 at 19:16, Andres Freund  wrote:
>> >> On 2022-10-13 14:38:06 +0100, Dave Page wrote:
>> >> > Thanks for that. It looks good to me, bar one comment (repeated 3
>> times in
>> >> > the sql and expected files):
>> >> >
>> >> > fetch timestamps from before the next test
>> >> >
>> >> > "from " should be removed.
>> >>
>> >> I was trying to say something with that from, but clearly it wasn't
>> >> understandable :). Removed.
>> >>
>> >> With that I pushed the changes and marked the CF entry as committed.
>> >
>> >
>> > Thanks!
>> >
>>
>> Hey folks,
>>
>> I was looking at this a bit further (great addition btw) and noticed
>> the following behavior (this is a mre of the original testing that
>> uncovered this):
>>
>> pagila=# select * from pg_stat_user_tables ;
>>  relid | schemaname | relname | seq_scan | last_seq_scan |
>> seq_tup_read | idx_scan | last_idx_scan | idx_tup_fetch | n_tup_ins |
>> n_tup_upd | n_tup_del | n_tup_hot_upd | n_live_tup | n_dead_tup |
>> n_mod_since_analyze | n_ins_since_vacuum | last_vacuum |
>> last_autovacuum | last_analyze | last_autoanalyze | vacuum_count |
>> autovacuum_count | analyze_count | autoanalyze_count
>>
>> ---++-+--+---+--+--+---+---+---+---+---+---+++-++-+-+--+--+--+--+---+---
>> (0 rows)
>>
>> pagila=# create table x (xx int);
>> CREATE TABLE
>> Time: 2.145 ms
>> pagila=# select * from pg_stat_user_tables ;
>>  relid | schemaname | relname | seq_scan | last_seq_scan |
>> seq_tup_read | idx_scan | last_idx_scan | idx_tup_fetch | n_tup_ins |
>> n_tup_upd | n_tup_del | n_tup_hot_upd | n_live_tup | n_dead_tup |
>> n_mod_since_analyze | n_ins_since_vacuum | last_vacuum |
>> last_autovacuum | last_analyze | last_autoanalyze | vacuum_count |
>> autovacuum_count | analyze_count | autoanalyze_count
>>
>> ---++-+--+---+--+--+---+---+---+---+---+---+++-++-+-+--+--+--+--+---+---
>>  16392 | public | x   |0 | [null]|
>> 0 |   [null] | [null]|[null] | 0 | 0 |
>> 0 | 0 |  0 |  0 |
>>  0 |  0 | [null]  | [null]  | [null]
>> | [null]   |0 |0 | 0 |
>> 0
>> (1 row)
>>
>> pagila=# insert into x select 1;
>> INSERT 0 1
>> pagila=# select * from pg_stat_user_tables ;
>>  relid | schemaname | relname | seq_scan | last_seq_scan  |
>> seq_tup_read | idx_scan | last_idx_scan | idx_tup_fetch | n_tup_ins |
>> n_tup_upd | n_tup_del | n_tup_hot_upd | n_live_tup | n_dead_tup |
>> n_mod_since_analyze | n_ins_since_vacuum | last_vacuum |
>> last_autovacuum | last_analyze | last_autoanalyze | vacuum_count |
>> autovacuum_count | analyze_count | autoanalyze_count
>>
>> ---++-+--++--+--+---+---+---+---+---+---+++-++-+-+--+--+--+--+---+---
>>  16392 | public | x   |0 | 1999-12-31 19:00:00-05 |
>> 0 |   [null] | [null]|[null] | 1 |
>> 0 | 0 |  

Re: Tracking last scan time

2022-10-31 Thread Dave Page
FYI, this is not intentional, and I do plan to look into it; however, I've
been somewhat busy with pgconfeu, and am travelling for the rest of this
week as well.

On Sun, 23 Oct 2022 at 21:09, Robert Treat  wrote:

> On Fri, Oct 14, 2022 at 2:55 PM Dave Page  wrote:
> > On Fri, 14 Oct 2022 at 19:16, Andres Freund  wrote:
> >> On 2022-10-13 14:38:06 +0100, Dave Page wrote:
> >> > Thanks for that. It looks good to me, bar one comment (repeated 3
> times in
> >> > the sql and expected files):
> >> >
> >> > fetch timestamps from before the next test
> >> >
> >> > "from " should be removed.
> >>
> >> I was trying to say something with that from, but clearly it wasn't
> >> understandable :). Removed.
> >>
> >> With that I pushed the changes and marked the CF entry as committed.
> >
> >
> > Thanks!
> >
>
> Hey folks,
>
> I was looking at this a bit further (great addition btw) and noticed
> the following behavior (this is an MRE of the original testing that
> uncovered this):
>
> pagila=# select * from pg_stat_user_tables ;
>  relid | schemaname | relname | seq_scan | last_seq_scan |
> seq_tup_read | idx_scan | last_idx_scan | idx_tup_fetch | n_tup_ins |
> n_tup_upd | n_tup_del | n_tup_hot_upd | n_live_tup | n_dead_tup |
> n_mod_since_analyze | n_ins_since_vacuum | last_vacuum |
> last_autovacuum | last_analyze | last_autoanalyze | vacuum_count |
> autovacuum_count | analyze_count | autoanalyze_count
>
> ---++-+--+---+--+--+---+---+---+---+---+---+++-++-+-+--+--+--+--+---+---
> (0 rows)
>
> pagila=# create table x (xx int);
> CREATE TABLE
> Time: 2.145 ms
> pagila=# select * from pg_stat_user_tables ;
>  relid | schemaname | relname | seq_scan | last_seq_scan |
> seq_tup_read | idx_scan | last_idx_scan | idx_tup_fetch | n_tup_ins |
> n_tup_upd | n_tup_del | n_tup_hot_upd | n_live_tup | n_dead_tup |
> n_mod_since_analyze | n_ins_since_vacuum | last_vacuum |
> last_autovacuum | last_analyze | last_autoanalyze | vacuum_count |
> autovacuum_count | analyze_count | autoanalyze_count
>
> ---++-+--+---+--+--+---+---+---+---+---+---+++-++-+-+--+--+--+--+---+---
>  16392 | public | x   |0 | [null]|
> 0 |   [null] | [null]|[null] | 0 | 0 |
> 0 | 0 |  0 |  0 |
>  0 |  0 | [null]  | [null]  | [null]
> | [null]   |0 |0 | 0 |
> 0
> (1 row)
>
> pagila=# insert into x select 1;
> INSERT 0 1
> pagila=# select * from pg_stat_user_tables ;
>  relid | schemaname | relname | seq_scan | last_seq_scan  |
> seq_tup_read | idx_scan | last_idx_scan | idx_tup_fetch | n_tup_ins |
> n_tup_upd | n_tup_del | n_tup_hot_upd | n_live_tup | n_dead_tup |
> n_mod_since_analyze | n_ins_since_vacuum | last_vacuum |
> last_autovacuum | last_analyze | last_autoanalyze | vacuum_count |
> autovacuum_count | analyze_count | autoanalyze_count
>
> ---++-+--++--+--+---+---+---+---+---+---+++-++-+-+--+--+--+--+---+---
>  16392 | public | x   |0 | 1999-12-31 19:00:00-05 |
> 0 |   [null] | [null]|[null] | 1 |
> 0 | 0 | 0 |  1 |  0 |
>  1 |  1 | [null]  | [null]  |
> [null]   | [null]   |0 |0 |
>  0 | 0
> (1 row)
>
> Normally we populate "last" columns with a NULL value when the
> corresponding marker is zero, which seems correct in the first query,
> but no longer matches in the second. I can see an argument that this
> is a necessary exception to that rule (I'm not sure I agree with it,
> but I see it) but even in that scenario,

Re: Tracking last scan time

2022-10-14 Thread Dave Page
On Fri, 14 Oct 2022 at 19:16, Andres Freund  wrote:

> Hi,
>
> On 2022-10-13 14:38:06 +0100, Dave Page wrote:
> > Thanks for that. It looks good to me, bar one comment (repeated 3 times
> in
> > the sql and expected files):
> >
> > fetch timestamps from before the next test
> >
> > "from " should be removed.
>
> I was trying to say something with that from, but clearly it wasn't
> understandable :). Removed.
>
> With that I pushed the changes and marked the CF entry as committed.


Thanks!


> --
-- 
Dave Page
https://pgsnake.blogspot.com

EDB Postgres
https://www.enterprisedb.com


Re: Tracking last scan time

2022-10-13 Thread Dave Page
Hi

On Wed, 12 Oct 2022 at 23:52, Andres Freund  wrote:

> Hi,
>
> On 2022-10-12 12:50:31 -0700, Andres Freund wrote:
> > I think this should have at a basic test in
> src/test/regress/sql/stats.sql. If
> > I can write one in a few minutes I'll go for that, otherwise will reply
> > detailing difficulties.
>
> Took a bit longer (+lunch). Attached.
>
>
> In the attached 0001, the patch to make
> GetCurrentTransactionStopTimestamp()
> set xactStopTimestamp, I added a few comment updates and an Assert() to
> ensure
> that CurrentTransactionState->state is
> TRANS_(DEFAULT|COMMIT|ABORT|PREPARE). I
> am worried that otherwise we might end up with someone ending up using it
> in a
> place before the end of the transaction, which'd then end up recording the
> wrong timestamp in the commit/abort record.
>
>
> For 0002, the commit adding lastscan, I added catversion/stats version
> bumps
> (because I was planning to commit it already...), a commit message, and
> that
> minor docs change mentioned earlier.
>
>
> 0003 adds the tests mentioned above. I plan to merge them with 0002, but
> left
> them separate for easier review for now.
>
> To be able to compare timestamps for > not just >= we need to make sure
> that
> two subsequent timestamps differ. The attached achieves this by sleeping
> for
> 100ms between those points - we do that in other places already. I'd
> started
> out with 10ms, which I am fairly sure would suffice, but then decided to
> copy
> the existing 100ms sleeps.
>
> I verified tests pass under valgrind, debug_discard_caches and after I make
> pgstat_report_stat() only flush when force is passed in.
>

Thanks for that. It looks good to me, bar one comment (repeated 3 times in
the sql and expected files):

fetch timestamps from before the next test

"from " should be removed.

-- 
Dave Page
Blog: https://pgsnake.blogspot.com
Twitter: @pgsnake

EDB: https://www.enterprisedb.com


Re: Tracking last scan time

2022-10-12 Thread Dave Page
On Wed, 12 Oct 2022 at 07:40, Michael Paquier  wrote:

> On Mon, Oct 03, 2022 at 12:55:40PM +0100, Dave Page wrote:
> > Thanks. It's just the changes in xact.c, so it doesn't seem like it would
> > cause you any more work either way, in which case, I'll leave it to you
> :-)
>
> Okay, I have just moved the patch to the next CF then, still marked as
> ready for committer.  Are you planning to look at that?
>

Thanks. Was the question directed at me or Andres?

-- 
Dave Page
Blog: https://pgsnake.blogspot.com
Twitter: @pgsnake

EDB: https://www.enterprisedb.com


Re: Tracking last scan time

2022-10-03 Thread Dave Page
Hi

On Fri, 30 Sept 2022 at 18:58, Andres Freund  wrote:

> Hi,
>
> On 2022-09-30 17:58:31 +0200, Vik Fearing wrote:
> > On 9/7/22 12:03, Dave Page wrote:
> > > Here's a v4 patch. This reverts to using
> > > GetCurrentTransactionStopTimestamp() for the last_scan times, and will
> > > set xactStopTimestamp the first time
> GetCurrentTransactionStopTimestamp()
> > > is called, thus avoiding multiple gettimeofday() calls.
> > > SetCurrentTransactionStopTimestamp() is removed, as is use
> > > of xactStopTimestamp (except when resetting it to 0).
> >
> > This patch looks good to me and has much saner behavior than what it
> > replaces.
>
> I agree. However, it seems like a significant enough behavioural change
> that
> I'd rather commit it as a separate patch.  I agree with Vik's judgement
> that
> the patch otherwise is otherwise ready. Happy to do that split myself, or
> you
> can do it...
>

Thanks. It's just the changes in xact.c, so it doesn't seem like it would
cause you any more work either way, in which case, I'll leave it to you :-)

FYI, the OID I chose was simply the closest single value to those used for
the other related functions (e.g. pg_stat_get_numscans). Seemed like a good
way to use up one more random unused value, but I don't care if it gets
changed to the 8000+ range.

-- 
Dave Page
Blog: https://pgsnake.blogspot.com
Twitter: @pgsnake

EDB: https://www.enterprisedb.com


Re: Tracking last scan time

2022-09-07 Thread Dave Page
Hi

On Tue, 6 Sept 2022 at 16:53, Andres Freund  wrote:

> Hi,
>
> On 2022-09-06 14:15:56 +0100, Dave Page wrote:
> > Vik and I looked at this a little, and found that we don't generally have
> > GetCurrentTransactionStopTimestamp() at this point - a simple 'select *
> > from pg_class' will result in 9 passes of this code, none of which have
> > xactStopTimestamp != 0.
>
> Huh, pgstat_report_stat() has used GetCurrentTransactionStopTimestamp()
> for a long time. Wonder when that was broken. Looks like it's set only
> when a xid is assigned. We should fix this.
>
>
> > After discussing it a little, we came to the conclusion that for the
> stated
> > use case, xactStartTimestamp is actually accurate enough, provided that
> we
> > only ever update it with a newer value. It would only likely be in
> extreme
> > edge-cases where the difference between start and end transaction time
> > would have any bearing on whether or not one might drop a table/index for
> > lack of use.
>
> I don't at all agree with this. Since we already use
> GetCurrentTransactionStopTimestamp() in this path we should fix it.
>

I just spent some time looking at this, and as far as I can see, we only
set xactStopTimestamp if the transaction needs to be WAL logged (and in
those cases, it is set before the stats callback runs). As you note though,
we are already calling GetCurrentTransactionStopTimestamp() in the
read-only case anyway, and thus already incurring the cost of
gettimeofday().

Here's a v4 patch. This reverts to using
GetCurrentTransactionStopTimestamp() for the last_scan times, and will
set xactStopTimestamp the first time GetCurrentTransactionStopTimestamp()
is called, thus avoiding multiple gettimeofday() calls.
SetCurrentTransactionStopTimestamp() is removed, as is use
of xactStopTimestamp (except when resetting it to 0).

-- 
Dave Page
Blog: https://pgsnake.blogspot.com
Twitter: @pgsnake

EDB: https://www.enterprisedb.com


last_scan_v4.diff
Description: Binary data


Re: Tracking last scan time

2022-09-06 Thread Dave Page
Hi

On Thu, 1 Sept 2022 at 19:35, Andres Freund  wrote:

> Hi,
>
> On 2022-09-01 14:18:42 +0200, Matthias van de Meent wrote:
> > On Wed, 31 Aug 2022 at 20:56, Andres Freund  wrote:
> > > But given this is done when stats are flushed, which only happens
> after the
> > > transaction ended, we can just use
> GetCurrentTransactionStopTimestamp() - if
> > > we got to flushing the transaction stats we'll already have computed
> that.
> >
> > I'm not entirely happy with that, as that would still add function
> > call overhead, and potentially still call GetCurrentTimestamp() in
> > this somewhat hot loop.
>
> We already used GetCurrentTransactionStopTimestamp() (as you reference
> below)
> before we get to this point, so I doubt that we'll ever call
> GetCurrentTimestamp(). And it's hard to imagine that the function call
> overhead of GetCurrentTransactionStopTimestamp() matters compared to
> acquiring
> locks etc.


Vik and I looked at this a little, and found that we don't generally have
GetCurrentTransactionStopTimestamp() at this point - a simple 'select * from
pg_class' will result in 9 passes of this code, none of which have
xactStopTimestamp != 0.

After discussing it a little, we came to the conclusion that for the stated
use case, xactStartTimestamp is actually accurate enough, provided that we
only ever update it with a newer value. It would likely only be in extreme
edge cases that the difference between transaction start and end time would
have any bearing on whether or not one might drop a table/index for lack of
use.

Doing it this way also means we no longer need the GUC to enable the
feature, which as Bruce notes, is likely to lose 95% of users.

Updated patch attached:

- GUC removed.
- The timestamp recorded is xactStartTimestamp.
- Docs updated to make it clear we're recording transaction start time.

-- 
Dave Page
Blog: https://pgsnake.blogspot.com
Twitter: @pgsnake

EDB: https://www.enterprisedb.com


last_scan_v3.diff
Description: Binary data


Re: Tracking last scan time

2022-09-01 Thread Dave Page
On Thu, 1 Sept 2022 at 13:04, Bruce Momjian  wrote:

> On Thu, Sep  1, 2022 at 09:46:59AM +0100, Dave Page wrote:
> > On Wed, 31 Aug 2022 at 17:13, Bruce Momjian  wrote:
> > Wow.  I was just thinking you need second-level accuracy, which must
> be
> > cheap somewhere.
> >
> >
> > Second-level accuracy would indeed be fine for this. Frankly, for my use
> case
> > just the date would be enough, but I can imagine people wanting greater
> > accuracy than that.
> >
> > And yes, I was very surprised by the timing results I got as well. I
> guess it's
> > a quirk of macOS - on a Linux box I get ~4s for gettimeofday() and ~1s
> for time
> > ().
>
> i think we lose 95% of our users if we require it to be enabled so let's
> work to find a way it can be always enabled.
>

So based on Andres' suggestion, something like this seems like it might
work:

if (pgstat_track_scan_timestamps && lstats->t_counts.t_numscans)
{
    TimestampTz t = GetCurrentTransactionStopTimestamp();

    if (t > tabentry->lastscan)
        tabentry->lastscan = t;
}

If that seems like a good option, I can run some more benchmarks (and then
remove the GUC if it looks good).

-- 
Dave Page
Blog: https://pgsnake.blogspot.com
Twitter: @pgsnake

EDB: https://www.enterprisedb.com


Re: Tracking last scan time

2022-09-01 Thread Dave Page
On Wed, 31 Aug 2022 at 17:13, Bruce Momjian  wrote:

> On Wed, Aug 31, 2022 at 05:02:33PM +0100, Dave Page wrote:
> >
> >
> > On Tue, 30 Aug 2022 at 19:46, Bruce Momjian  wrote:
> >
> > On Fri, Aug 26, 2022 at 02:05:36PM +0100, Dave Page wrote:
> > > On Thu, 25 Aug 2022 at 01:44, David Rowley 
> wrote:
> > > I don't have a particular opinion about the patch, I'm just
> pointing
> > > out that there are other ways. Even just writing down the
> numbers on
> > a
> > > post-it note and coming back in a month to see if they've
> changed is
> > > enough to tell if the table or index has been used.
> > >
> > >
> > > There are usually other ways to perform monitoring tasks, but
> there is
> > > something to be said for the convenience of having functionality
> built in
> > and
> > > not having to rely on tools, scripts, or post-it notes :-)
> >
> > Should we consider using something cheaper like time() so we don't
> need
> > a GUC to enable this?
> >
> >
> > Interesting idea, but on my mac at least, 100,000,000 gettimeofday()
> calls
> > takes about 2 seconds, whilst 100,000,000 time() calls takes 14(!)
> seconds.
>
> Wow.  I was just thinking you need second-level accuracy, which must be
> cheap somewhere.
>

Second-level accuracy would indeed be fine for this. Frankly, for my use
case just the date would be enough, but I can imagine people wanting
greater accuracy than that.

And yes, I was very surprised by the timing results I got as well. I guess
it's a quirk of macOS - on a Linux box I get ~4s for gettimeofday() and ~1s
for time().

-- 
Dave Page
Blog: https://pgsnake.blogspot.com
Twitter: @pgsnake

EDB: https://www.enterprisedb.com


Re: Tracking last scan time

2022-08-31 Thread Dave Page
On Tue, 30 Aug 2022 at 19:46, Bruce Momjian  wrote:

> On Fri, Aug 26, 2022 at 02:05:36PM +0100, Dave Page wrote:
> > On Thu, 25 Aug 2022 at 01:44, David Rowley  wrote:
> > I don't have a particular opinion about the patch, I'm just pointing
> > out that there are other ways. Even just writing down the numbers on
> a
> > post-it note and coming back in a month to see if they've changed is
> > enough to tell if the table or index has been used.
> >
> >
> > There are usually other ways to perform monitoring tasks, but there is
> > something to be said for the convenience of having functionality built
> in and
> > not having to rely on tools, scripts, or post-it notes :-)
>
> Should we consider using something cheaper like time() so we don't need
> a GUC to enable this?
>

Interesting idea, but on my Mac at least, 100,000,000 gettimeofday() calls
take about 2 seconds, whilst 100,000,000 time() calls take 14(!) seconds.

-- 
Dave Page
Blog: https://pgsnake.blogspot.com
Twitter: @pgsnake

EDB: https://www.enterprisedb.com


Re: Tracking last scan time

2022-08-26 Thread Dave Page
Hi

On Thu, 25 Aug 2022 at 01:44, David Rowley  wrote:

> On Thu, 25 Aug 2022 at 03:03, Bruce Momjian  wrote:
> >
> > On Wed, Aug 24, 2022 at 04:01:21PM +0100, Dave Page wrote:
> > > On Wed, 24 Aug 2022 at 15:18, Bruce Momjian  wrote:
> > > Would it be simpler to allow the sequential and index scan columns
> to be
> > > cleared so you can look later to see if it is non-zero?  Should we
> allow
> > >
> > > I don't think so, because then stat values wouldn't necessarily
> correlate with
> > > each other, and you wouldn't know when any of them were last reset
> unless we
> > > started tracking each individual reset. At least now you can see when
> they were
> > > all reset, and you know they were reset at the same time.
> >
> > Yeah, true.  I was more asking if these two columns are in some way
> > special or if people would want a more general solution, and if so, is
> > that something we want in core Postgres.
>
> Back when I used to do a bit of PostgreSQL DBA stuff, I had a nightly
> job setup to record the state of pg_stat_all_tables and put that into
> another table along with the current date. I then had a view that did
> some calculations with col - LAG(col) OVER (PARTITION BY relid ORDER
> BY date) to fetch the numerical values for each date.  I didn't ever
> want to reset the stats because it messes with autovacuum. If you zero
> out n_ins_since_vacuum more often than auto-vacuum would trigger, then
> bad things happen over time (we should really warn about that in the
> docs).
>
> I don't have a particular opinion about the patch, I'm just pointing
> out that there are other ways. Even just writing down the numbers on a
> post-it note and coming back in a month to see if they've changed is
> enough to tell if the table or index has been used.
>

There are usually other ways to perform monitoring tasks, but there is
something to be said for the convenience of having functionality built in
and not having to rely on tools, scripts, or post-it notes :-)
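
For reference, a rough sketch of the nightly-snapshot approach David
describes might look something like this (the table and view names are
purely illustrative):

-- One-off setup: a table holding dated snapshots of the counters.
CREATE TABLE stats_snapshot AS
    SELECT current_date AS snap_date, relid, schemaname, relname,
           seq_scan, idx_scan
    FROM pg_stat_all_tables;

-- Run nightly from cron or similar:
INSERT INTO stats_snapshot
    SELECT current_date, relid, schemaname, relname, seq_scan, idx_scan
    FROM pg_stat_all_tables;

-- Per-day deltas; relations whose deltas stay at zero aren't being used.
CREATE VIEW stats_delta AS
    SELECT snap_date, schemaname, relname,
           seq_scan - lag(seq_scan) OVER w AS seq_scan_delta,
           idx_scan - lag(idx_scan) OVER w AS idx_scan_delta
    FROM stats_snapshot
    WINDOW w AS (PARTITION BY relid ORDER BY snap_date);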


>
> We do also need to consider now that stats are stored in shared memory
> that any fields we add are in RAM.
>

That is a fair point. I believe this is both minimal and useful, though.

I've attached a v2 patch that incorporates Greg's suggestions.

-- 
Dave Page
Blog: https://pgsnake.blogspot.com
Twitter: @pgsnake

EDB: https://www.enterprisedb.com


last_scan_v2.diff
Description: Binary data


Re: Tracking last scan time

2022-08-24 Thread Dave Page
On Wed, 24 Aug 2022 at 16:03, Bruce Momjian  wrote:

> On Wed, Aug 24, 2022 at 04:01:21PM +0100, Dave Page wrote:
> > On Wed, 24 Aug 2022 at 15:18, Bruce Momjian  wrote:
> >
> > On Tue, Aug 23, 2022 at 10:55:09AM +0100, Dave Page wrote:
> > > Often it is beneficial to review one's schema with a view to
> removing
> > indexes
> > > (and sometimes tables) that are no longer required. It's very
> difficult
> > to
> > > understand when that is the case by looking at the number of scans
> of a
> > > relation as, for example, an index may be used infrequently but
> may be
> > critical
> > > in those times when it is used.
> > >
> > > The attached patch against HEAD adds optional tracking of the last
> scan
> > time
> > > for relations. It updates pg_stat_*_tables with new last_seq_scan
> and
> > > last_idx_scan columns, and pg_stat_*_indexes with a last_idx_scan
> column
> > to
> > > help with this.
> >
> > Would it be simpler to allow the sequential and index scan columns
> to be
> > cleared so you can look later to see if it is non-zero?  Should we
> allow
> >
> > I don't think so, because then stat values wouldn't necessarily
> correlate with
> > each other, and you wouldn't know when any of them were last reset
> unless we
> > started tracking each individual reset. At least now you can see when
> they were
> > all reset, and you know they were reset at the same time.
>
> Yeah, true.  I was more asking if these two columns are in some way
> special or if people would want a more general solution, and if so, is
> that something we want in core Postgres.
>

I think they're special in the sense that they're the ones you're most
likely to look at to see how much a relation is used (at least, I'd look at
them rather than the tuple counts).

There are certainly other things for which a last-usage value may be
useful - functions/procedures, for example, or views. The benefits of
removing unused objects of those types are far, far lower than for indexes
or tables, of course.

There are other potential use cases for similar timestamps, such as object
creation times (and creating user), but they are more useful for auditing
than monitoring and optimisation.

-- 
Dave Page
Blog: https://pgsnake.blogspot.com
Twitter: @pgsnake

EDB: https://www.enterprisedb.com


Re: Tracking last scan time

2022-08-24 Thread Dave Page
On Wed, 24 Aug 2022 at 15:18, Bruce Momjian  wrote:

> On Tue, Aug 23, 2022 at 10:55:09AM +0100, Dave Page wrote:
> > Often it is beneficial to review one's schema with a view to removing
> indexes
> > (and sometimes tables) that are no longer required. It's very difficult
> to
> > understand when that is the case by looking at the number of scans of a
> > relation as, for example, an index may be used infrequently but may be
> critical
> > in those times when it is used.
> >
> > The attached patch against HEAD adds optional tracking of the last scan
> time
> > for relations. It updates pg_stat_*_tables with new last_seq_scan and
> > last_idx_scan columns, and pg_stat_*_indexes with a last_idx_scan column
> to
> > help with this.
>
> Would it be simpler to allow the sequential and index scan columns to be
> cleared so you can look later to see if it is non-zero?  Should we allow
> arbitrary clearing of stat columns?
>

I don't think so, because then stat values wouldn't necessarily correlate
with each other, and you wouldn't know when any of them were last reset
unless we started tracking each individual reset. At least now you can see
when they were all reset, and you know they were reset at the same time.
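
To illustrate the point about reset times, this is roughly what's available
today:

-- Resets all statistics counters for the current database in one go...
SELECT pg_stat_reset();

-- ...and the time of that reset is recorded, so all counters can be
-- interpreted relative to a single, known point in time.
SELECT datname, stats_reset
FROM pg_stat_database
WHERE datname = current_database();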

-- 
Dave Page
Blog: https://pgsnake.blogspot.com
Twitter: @pgsnake

EDB: https://www.enterprisedb.com


Re: Tracking last scan time

2022-08-24 Thread Dave Page
Hi

On Tue, 23 Aug 2022 at 13:07, Greg Stark  wrote:

> On Tue, 23 Aug 2022 at 11:00, Dave Page  wrote:
> >
> > Often it is beneficial to review one's schema with a view to removing
> indexes (and sometimes tables) that are no longer required. It's very
> difficult to understand when that is the case by looking at the number of
> scans of a relation as, for example, an index may be used infrequently but
> may be critical in those times when it is used.
>
> I think this is easy to answer in a prometheus/datadog/etc world since
> you can consult the history of the count to see when it was last
> incremented. (Or do effectively that continously).
>

Yes. But not every PostgreSQL instance is monitored in that way.


>
> I guess that just reinforces the idea that it should be optional.
> Perhaps there's room for some sort of general feature for controlling
> various time series aggregates like max() and min() sum() or, uhm,
> timeoflastchange() on whatever stats you want. That would let us
> remove a bunch of stuff from pg_stat_statements and let users turn on
> just the ones they want. And also let users enable things like time of
> last rollback or conflict etc. But that's just something to think
> about down the road.
>

It's certainly an interesting idea.


>
> > The attached patch against HEAD adds optional tracking of the last scan
> time for relations. It updates pg_stat_*_tables with new last_seq_scan and
> last_idx_scan columns, and pg_stat_*_indexes with a last_idx_scan column to
> help with this.
> >
> > Due to the use of gettimeofday(), those values are only maintained if a
> new GUC, track_scans, is set to on. By default, it is off.
>
> Bikeshedding warning -- "track_scans" could equally apply to almost
> any stats about scans. I think the really relevant thing here is the
> times, not the scans. I think the GUC should be "track_scan_times". Or
> could that still be confused with scan durations? Maybe
> "track_scan_timestamps"?
>

The latter seems reasonable.


>
> You could maybe make the gettimeofday cheaper by doing it less often.
> Like, skipping the increment if the old timestamp is newer than 1s
> before the transaction start time (I think that's available free if
> some other guc is enabled but I don't recall). Or isn't this cb
> normally happening after transaction end? So xactStopTimestamp might
> be available already?
>

Something like:

if (pgstat_track_scan_timestamps && lstats->t_counts.t_numscans &&
    tabentry->lastscan + USECS_PER_SEC < GetCurrentTransactionStopTimestamp())
    tabentry->lastscan = GetCurrentTimestamp();

?

-- 
Dave Page
Blog: https://pgsnake.blogspot.com
Twitter: @pgsnake

EDB: https://www.enterprisedb.com


Tracking last scan time

2022-08-23 Thread Dave Page
Often it is beneficial to review one's schema with a view to removing
indexes (and sometimes tables) that are no longer required. It's very
difficult to tell when that is the case just by looking at the number of
scans of a relation; for example, an index may be used infrequently but be
critical on the occasions when it is used.

The attached patch against HEAD adds optional tracking of the last scan
time for relations. It updates pg_stat_*_tables with new last_seq_scan and
last_idx_scan columns, and pg_stat_*_indexes with a last_idx_scan column to
help with this.

Due to the use of gettimeofday(), those values are only maintained if a new
GUC, track_scans, is set to on. By default, it is off.
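
As a sketch of the intended usage (assuming the patch is applied and the
GUC enabled), finding rarely-used indexes might look like:

-- Indexes never scanned, or not scanned for six months, since the stats
-- were last reset. last_idx_scan is the column added by this patch.
SELECT schemaname, relname, indexrelname, idx_scan, last_idx_scan
FROM pg_stat_user_indexes
WHERE last_idx_scan IS NULL
   OR last_idx_scan < now() - interval '6 months'
ORDER BY last_idx_scan NULLS FIRST;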

I did run a 12-hour test to see what the performance impact is. pgbench was
run with scale factor 1 and 75 users across 4 identical bare-metal machines
running Rocky 8 in parallel, which showed roughly a 2% average performance
penalty against HEAD with track_scans enabled. The machines were PowerEdge
R7525s with 128GB RAM and dual 16C/32T AMD 7302 CPUs, with the data
directory on 6 x 800GB 12Gb/s SAS SSDs in RAID 0. The kernel time source is
tsc.

          HEAD         track_scans   Penalty (%)
box1      19582.49735  19341.8881    -1.22869541
box2      19936.55513  19928.07479   -0.04253664659
box3      19631.78895  18649.64379   -5.002830696
box4      19810.86767  19420.67192   -1.969604525
Average   19740.42728  19335.06965   -2.05343896

Doc and test updates included.

-- 
Dave Page
Blog: https://pgsnake.blogspot.com
Twitter: @pgsnake

EDB: https://www.enterprisedb.com


last_scan_v1.diff
Description: Binary data


Re: pgsql: Default to hidden visibility for extension libraries where possi

2022-07-20 Thread Dave Page
On Wed, 20 Jul 2022 at 16:12, Tom Lane  wrote:

> Alvaro Herrera  writes:
> > On 2022-Jul-20, Tom Lane wrote:
> >> I'll try to do some research later today to identify anything else
> >> we need to mark in plpgsql.  I recall doing some work specifically
> >> creating functions for pldebugger's use, but I'll need to dig.
>
> > I suppose you're probably thinking of commit 53ef6c40f1e7; that didn't
> > expose functions directly, but through plpgsql_plugin_ptr.  Maybe that
> > one does need to be made PGDLLEXPORT, since currently it isn't.
>
> After some experimentation, it does not need to be marked: pldebugger
> gets at that via find_rendezvous_variable(), so there is no need for
> any explicit linkage at all between plpgsql.so and plugin_debugger.so.
>
> Along the way, I made a quick hack to get pldebugger to load into
> v15/HEAD.  It lacks #ifdef's which'd be needed so that it'd still
> compile against older branches, but perhaps this'll save someone
> some time.
>

Thanks Tom - I've pushed that patch with the relevant #ifdefs added.

-- 
Dave Page
PostgreSQL Core Team
http://www.postgresql.org/


Re: removing datlastsysoid

2022-05-16 Thread Dave Page
On Mon, 16 May 2022 at 15:06, David Steele  wrote:

> On 5/16/22 9:43 AM, Dave Page wrote:
> >
> >
> > On Thu, 20 Jan 2022 at 14:03, Robert Haas  wrote:
> >
> > On Mon, Jan 17, 2022 at 3:43 PM Tom Lane  wrote:
> >  > +1.  Another reason to get rid of it is that it has nothing to do
> >  > with the system OID ranges defined in access/transam.h.
> >
> > Agreed. Thanks for looking. Committed.
> >
> >
> > So we just ran into this whilst updating pgAdmin to support PG15. How is
> > one supposed to figure out what the last system OID is now from an
> > arbitrary database? pgAdmin uses that value in well over 300 places in
> > its source.
>
> We ran into the same issue in pgBackRest. The old query that initdb used
> to generate these values is no good for PG15 since the template
> databases now have fixed low oids.
>
> Out solution was to use the constant:
>
> #define FirstNormalObjectId 16384
>
> And treat anything below that as a system oid. This constant has not
> changed in a very long time (if ever) but we added it to our list of
> constants to recheck with each release.
>

Yes, that seems reasonable. I imagine changing that value would very likely
break pg_upgrade, so I suspect it'll stay as it is for a while longer.
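
As a rough example of what a client can do instead, using the constant
David mentions (anything with an OID below 16384 is treated as a system
object):

-- e.g. listing only user-created tables:
SELECT c.oid, n.nspname, c.relname
FROM pg_class c
     JOIN pg_namespace n ON n.oid = c.relnamespace
WHERE c.oid >= 16384
  AND c.relkind = 'r';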


>
> We used the initdb query to provide backward compatibility for older
> versions of pgbackrest using PG <= 14, but are using FirstNormalObjectId
> going forward.
>
> See
>
> https://github.com/pgbackrest/pgbackrest/commit/692fe496bdb5fa6dcffeb9f85b6188ceb1df707a
> for details.
>

 Thanks David!

-- 
Dave Page
Blog: https://pgsnake.blogspot.com
Twitter: @pgsnake

EDB: https://www.enterprisedb.com


Re: removing datlastsysoid

2022-05-16 Thread Dave Page
On Thu, 20 Jan 2022 at 14:03, Robert Haas  wrote:

> On Mon, Jan 17, 2022 at 3:43 PM Tom Lane  wrote:
> > +1.  Another reason to get rid of it is that it has nothing to do
> > with the system OID ranges defined in access/transam.h.
>
> Agreed. Thanks for looking. Committed.
>

So we just ran into this whilst updating pgAdmin to support PG15. How is
one supposed to figure out what the last system OID is now from an
arbitrary database? pgAdmin uses that value in well over 300 places in its
source.

-- 
Dave Page
Blog: https://pgsnake.blogspot.com
Twitter: @pgsnake

EDB: https://www.enterprisedb.com


Re: gitmaster access

2022-05-12 Thread Dave Page
On Thu, 12 May 2022 at 08:03, Tatsuo Ishii  wrote:

> > At Thu, 12 May 2022 14:44:15 +0900 (JST), Tatsuo Ishii <
> is...@sraoss.co.jp> wrote in
> >> >> Thank you for the info, but unfortunately it hasn't worked.
> >> >> I'm going to try a slightly different steps..
> >> >
> >> > And finally I succeeded to clone from git.postgresql.org and to push
> a
> >> > commit.
> >>
> >> Is it git.postgresql.org, not gitmaster.postgresql.org? Interesting...
> >
> > git.postgresql.org.  I still receive "Permission denied" from
> > gitmaster.
>
> Ok. I learned that only postgresql.git should be accessed from
> gitmaster.postgresql.org. All other repos should be accessed from
> git.postgresql.org.


That is correct for PostgreSQL Committers such as yourself. Anyone else can
*only* use git.postgresql.org.


>
> BTW,
> > I'm going to try a slightly different steps..
>
> Can you please tell me What you actually did? I am afraid of facing
> similar problem if I want to add another committer to pgpool2 repo.
>
> Best reagards,
> --
> Tatsuo Ishii
> SRA OSS, Inc. Japan
> English: http://www.sraoss.co.jp/index_en.php
> Japanese:http://www.sraoss.co.jp
>
-- 
-- 
Dave Page
https://pgsnake.blogspot.com

EDB Postgres
https://www.enterprisedb.com


Re: gitmaster access

2022-05-12 Thread Dave Page
On Thu, 12 May 2022 at 07:11, Kyotaro Horiguchi 
wrote:

> At Thu, 12 May 2022 14:44:15 +0900 (JST), Tatsuo Ishii 
> wrote in
> > >> Thank you for the info, but unfortunately it hasn't worked.
> > >> I'm going to try a slightly different steps..
> > >
> > > And finally I succeeded to clone from git.postgresql.org and to push a
> > > commit.


\o/


> >
> > Is it git.postgresql.org, not gitmaster.postgresql.org? Interesting...
>
> git.postgresql.org.  I still receive "Permission denied" from
> gitmaster.



Yes, gitmaster is completely irrelevant here. It is *only* used for
PostgreSQL itself, and only by PostgreSQL Committers.

The postgresql.git repo on git.postgresql.org is unique in that it is a
mirror of the real repository on gitmaster, and doesn’t have any committers
except for the account used to push commits from gitmaster. The third-party
browser software doesn’t know anything about that, which is why it still
shows the ssh:// URL despite it not being usable by anyone.

Is there some reason you thought gitmaster was relevant here (some webpage
for example)? This is the third(?) time someone has been confused by
gitmaster recently, something both Magnus and I have been surprised by.
-- 
-- 
Dave Page
https://pgsnake.blogspot.com

EDB Postgres
https://www.enterprisedb.com


Re: gitmaster access

2022-05-11 Thread Dave Page
Hi

On Wed, 11 May 2022 at 13:56, Tatsuo Ishii  wrote:

> >> This does not work for me neither. However, in my case following works:
> >>
> >> ssh://g...@gitmaster.postgresql.org/pgtranslation/messages.git
> >
> >
> > If that works, then colour me confused because:
> >
> > gemulon:~# host gitmaster.postgresql.org
> > gitmaster.postgresql.org is an alias for gemulon.postgresql.org.
> > gemulon.postgresql.org has address 72.32.157.198
> > gemulon.postgresql.org has IPv6 address 2001:4800:3e1:1::198
> > gemulon:~# find / -name pgtranslation
> > gemulon:~# find / -name messages.git
> > gemulon:~# ls -al /home/git/repositories/
> > total 16
> > drwxr-xr-x 4 git git 4096 Jan  4  2020 .
> > drwxr-xr-x 8 git git 4096 May 11 09:03 ..
> > drwxr-xr-x 7 git git 4096 Jan  4  2020 mhatest.git
> > drwxr-sr-x 7 git git 4096 May 11 06:39 postgresql.git
> > gemulon:~#
>
> Sorry, I meant ssh://g...@gitmaster.postgresql.org/postgresql.git
> works, but ssh://g...@git.postgresql.org/postgresql.git does not work
> for me.
>

That is expected; no one has write access to that repo (and we only include
SSH keys for users with write access).

-- 
Dave Page
Blog: https://pgsnake.blogspot.com
Twitter: @pgsnake

EDB: https://www.enterprisedb.com


Re: gitmaster access

2022-05-11 Thread Dave Page
On Wed, 11 May 2022 at 09:25, Kyotaro Horiguchi 
wrote:

> At Wed, 11 May 2022 09:08:26 +0100, Dave Page  wrote
> in
> > What is your community user ID?
>
> My community user name is "horiguti".
>

OK, so you have write access on the repo on git.postgresql.org, but I can't
find an SSH key for your account on the system. Can you check
https://www.postgresql.org/account/profile/ and make sure you've got the
correct SSH key in your profile? If you add one, it might take 10 minutes
or so to make its way to the git server.

-- 
Dave Page
Blog: https://pgsnake.blogspot.com
Twitter: @pgsnake

EDB: https://www.enterprisedb.com


Re: gitmaster access

2022-05-11 Thread Dave Page
Hi

On Wed, 11 May 2022 at 09:34, Tatsuo Ishii  wrote:

> > Hi
> >
> > On Wed, 11 May 2022 at 08:21, Kyotaro Horiguchi  >
> > wrote:
> >
> >> (Sorry in advance if this is off-topic of -hackers, and please head me
> >> to the right place if so.)
> >>
> >> I'm stuck by connection failure to gitmaster.
> >>
> >> I told that I already have the commit-bit on pgtranslation repository
> >> for the community account "horiguti".
> >>
> >> I did the following steps.
> >>
> >> 1. Add the public key for git-access to "SSH Key" field of "Edit User
> >>Profile" page.(https://www.postgresql.org/account/profile/) I did
> >>this more than few months ago.
> >>
> >> 2. Clone ssh://g...@gitmaster.postgresql.org/pgtranslation/messages.git.
> >>
> >
> > The correct repo is ssh://
> g...@git.postgresql.org/pgtranslation/messages.git.
>
> This does not work for me neither. However, in my case following works:
>
> ssh://g...@gitmaster.postgresql.org/pgtranslation/messages.git


If that works, then colour me confused because:

gemulon:~# host gitmaster.postgresql.org
gitmaster.postgresql.org is an alias for gemulon.postgresql.org.
gemulon.postgresql.org has address 72.32.157.198
gemulon.postgresql.org has IPv6 address 2001:4800:3e1:1::198
gemulon:~# find / -name pgtranslation
gemulon:~# find / -name messages.git
gemulon:~# ls -al /home/git/repositories/
total 16
drwxr-xr-x 4 git git 4096 Jan  4  2020 .
drwxr-xr-x 8 git git 4096 May 11 09:03 ..
drwxr-xr-x 7 git git 4096 Jan  4  2020 mhatest.git
drwxr-sr-x 7 git git 4096 May 11 06:39 postgresql.git
gemulon:~#


>
>
> Also Tom Lane said:
> On Sun, May 1, 2022 at 4:52 PM Tom Lane  wrote:
> > Tatsuo Ishii  writes:
> > > This is ok:
> > > git clone ssh://g...@gitmaster.postgresql.org/postgresql.git
> >
> > That's the thing to use if you're a committer.
>
> Best reagards,
> --
> Tatsuo Ishii
> SRA OSS, Inc. Japan
> English: http://www.sraoss.co.jp/index_en.php
> Japanese:http://www.sraoss.co.jp
>


-- 
Dave Page
Blog: https://pgsnake.blogspot.com
Twitter: @pgsnake

EDB: https://www.enterprisedb.com


Re: gitmaster access

2022-05-11 Thread Dave Page
Hi

On Wed, 11 May 2022 at 08:55, Kyotaro Horiguchi 
wrote:

> At Wed, 11 May 2022 08:46:40 +0100, Dave Page  wrote
> in
> > Hi
> >
> > On Wed, 11 May 2022 at 08:21, Kyotaro Horiguchi  >
> > wrote:
> > > 2. Clone ssh://g...@gitmaster.postgresql.org/pgtranslation/messages.git
> .
> > >
> >
> > The correct repo is ssh://
> g...@git.postgresql.org/pgtranslation/messages.git.
>
> Thanks for the reply.  I didn't wrote, but I have tried that and had
> the same result.
>

What is your community user ID?

-- 
Dave Page
Blog: https://pgsnake.blogspot.com
Twitter: @pgsnake

EDB: https://www.enterprisedb.com


Re: gitmaster access

2022-05-11 Thread Dave Page
Hi

On Wed, 11 May 2022 at 08:21, Kyotaro Horiguchi 
wrote:

> (Sorry in advance if this is off-topic of -hackers, and please head me
> to the right place if so.)
>
> I'm stuck by connection failure to gitmaster.
>
> I told that I already have the commit-bit on pgtranslation repository
> for the community account "horiguti".
>
> I did the following steps.
>
> 1. Add the public key for git-access to "SSH Key" field of "Edit User
>Profile" page.(https://www.postgresql.org/account/profile/) I did
>this more than few months ago.
>
> 2. Clone ssh://g...@gitmaster.postgresql.org/pgtranslation/messages.git.
>

The correct repo is ssh://g...@git.postgresql.org/pgtranslation/messages.git.


>
> The problem for me here is I get "Permission denied" by the second
> step.
>
> The following is an extract of verbose log when I did:
>
> > GIT_SSH_COMMAND="ssh -" git clone ssh://
> g...@gitmaster.postgresql.org/pgtranslation/messages.git
>
> debug1: Authenticating to gitmaster.postgresql.org:22 as 'git'
> debug1: Offering public key: /home/horiguti/.ssh/postgresql ECDSA
> SHA256:zMOonb8...
> debug3: send packet: type 50
> debug2: we sent a publickey packet, wait for reply
> debug3: receive packet: type 51
>
> The account and host looks correct. The server returns 51
> (SSH_MSG_USERAUTH_FAILURE), which means the server didn't find my
> public key, but the fingerprint shown above coincides with that of the
> registered public key. I don't have a clue of the reason from my side.
>
> Please someone tell me what to do to get over the situation.
>
> regards.
>
> --
> Kyotaro Horiguchi
> NTT Open Source Software Center
>
>
>

-- 
Dave Page
Blog: https://pgsnake.blogspot.com
Twitter: @pgsnake

EDB: https://www.enterprisedb.com


Re: Windows now has fdatasync()

2022-04-08 Thread Dave Page
On Fri, 8 Apr 2022 at 05:41, Tom Lane  wrote:

> Michael Paquier  writes:
> > On Fri, Apr 08, 2022 at 02:56:15PM +1200, Thomas Munro wrote:
> >> I propose that we drop support for Windows versions older than
> >> 10/Server 2016 in the PostgreSQL 16 cycle,
>
> Do we have any data on what people are actually using?
>

None that I know of. Anecdotally, we dropped support for pgAdmin on Windows
< 8 (2012 for the server edition), and had a single complaint - and the
user happily acknowledged they were on an old release and expected support
to be dropped sooner or later. Windows 8 was a pretty unpopular release, so
I would expect shifting to 10/2016+ for PG 16 would be unlikely to be a
major problem.

FWIW, Python dropped support for < 8/2012 with v3.9.


>
> > Do you think that we could raise the minimum C standard on WIN32 to
> > C11, at least for MSVC?
>
> As long as the C11-isms are in MSVC-only code, it seems like this is
> exactly equivalent to setting a minimum MSVC version.  I don't see
> an objection-in-principle there, it's just a practical question of
> how far back is reasonable to support MSVC versions.  (That's very
> distinct from how far back we need the built code to run.)
>
> regards, tom lane
>
>
>

-- 
Dave Page
Blog: https://pgsnake.blogspot.com
Twitter: @pgsnake

EDB: https://www.enterprisedb.com


Re: Problem with moderation of messages with patched attached.

2022-03-21 Thread Dave Page
On Sun, 20 Mar 2022 at 13:52, Andrew Dunstan  wrote:

>
> On 3/19/22 14:48, Andres Freund wrote:
> > Hi,
> >
> > On 2022-03-03 13:37:35 +, Dave Page wrote:
> >> On Thu, 3 Mar 2022 at 13:28, Pavel Borisov 
> wrote:
> >>
> >>> The mail system doesn't have the capability to apply different
> moderation
> >>>> rules for people in that way I'm afraid.
> >>>>
> >>> Maybe then 2MB for everyone? Otherwise it's not so convenient. It leads to
> >>> answers appearing before the questions in the thread [1], which seems weird.
> >>>
> >> Then someone will complain if their patch is 2.1MB! How often are
> messages
> >> legitimately over 1MB anyway, even with a patch? I don't usually
> moderate
> >> -hackers, so I don't know if this is a common thing or not.
> > I don't think it's actually that rare. But most contributors writing that
> > large patchsets know about the limit and work around it - I gzip patches
> when
> > I see the email getting too large. But it's more annoying to work with
> for
> > reviewers.
> >
> > It's somewhat annoying. If you e.g. append a few graphs of performance
> changes
> > and a patch it's pretty easy to get into the range where compressing
> won't
> > help anymore.
> >
> > And sure, any limit may be hit by somebody. But 1MB across the whole
> email
> > seems pretty low these days.
> >
>
> Of course we could get complaints no matter what level we set the limit
> at. I think raising it to 2Mb would be a reasonable experiment. If no
> observable evil ensues then leave it that way. If it does then roll it
> back. I agree that plain uncompressed patches are easier to deal with in
> general.
>

Thanks for the reminder :-)

I've bumped the limit to 2MB.

-- 
Dave Page
Blog: https://pgsnake.blogspot.com
Twitter: @pgsnake

EDB: https://www.enterprisedb.com


Re: Problem with moderation of messages with patched attached.

2022-03-03 Thread Dave Page
On Thu, 3 Mar 2022 at 13:28, Pavel Borisov  wrote:

> The mail system doesn't have the capability to apply different moderation
>> rules for people in that way I'm afraid.
>>
> Maybe then 2MB for everyone? Otherwise it's not so convenient. It leads to
> answers appearing before the questions in the thread [1], which seems weird.
>

Then someone will complain if their patch is 2.1MB! How often are messages
legitimately over 1MB anyway, even with a patch? I don't usually moderate
-hackers, so I don't know if this is a common thing or not.

I'll ping a message across to the sysadmin team anyway; I can't just change
that setting without buy-in from the rest of the team.

-- 
Dave Page
Blog: https://pgsnake.blogspot.com
Twitter: @pgsnake

EDB: https://www.enterprisedb.com


Re: Problem with moderation of messages with patched attached.

2022-03-03 Thread Dave Page
On Thu, 3 Mar 2022 at 13:22, Pavel Borisov  wrote:

> Message to list pgsql-hackers held for moderation due to 'Size 1MB
>> (1061796 bytes) is larger than threshold 1000KB (1024000 bytes)', notice
>> queued for 2 moderators
>>
> Could you make this limit 2MB at least for authorized commitfest members?
> Thanks!
>

The mail system doesn't have the capability to apply different moderation
rules for people in that way I'm afraid.

-- 
Dave Page
Blog: https://pgsnake.blogspot.com
Twitter: @pgsnake

EDB: https://www.enterprisedb.com


Re: Problem with moderation of messages with patched attached.

2022-03-03 Thread Dave Page
Hi

On Thu, 3 Mar 2022 at 12:31, Pavel Borisov  wrote:

> Hi, hackers!
>
> Around 2 months ago I noticed a problem: messages containing patches in
> the thread [1] were always held for manual moderation. They appear in the
> -hackers thread hours after posting. None of them are from new CF members
> and, personally, I don't see a reason for such inconvenience. The problem
> still exists as of today.
>
> Can someone make changes in the moderation engine to make it more liberal
> and convenient for authors?
>
> [1]
> https://www.postgresql.org/message-id/flat/CACG%3DezZe1NQSCnfHOr78AtAZxJZeCvxrts0ygrxYwe%3DpyyjVWA%40mail.gmail.com
>

Here's the moderation reason for that message:

Message to list pgsql-hackers held for moderation due to 'Size 1MB (1061796
bytes) is larger than threshold 1000KB (1024000 bytes)', notice queued for
2 moderators

-- 
Dave Page
Blog: https://pgsnake.blogspot.com
Twitter: @pgsnake

EDB: https://www.enterprisedb.com


Re: Observability in Postgres

2022-02-15 Thread Dave Page
I strongly dislike the idea of building this around the
prometheus exporter format. Whilst that is certainly a useful format if
you're using prom (as many do), it does have limitations and quirks that
would make it painful for other systems to use; for example, the need to
encode non-numeric data into labels rather than the metrics themselves
(e.g. server version strings or LSNs). I would much prefer to see a common
format such as JSON used by default, and perhaps offer a hook to allow
alternate formatters to replace that. The prometheus format is also pretty
inefficient, as you have to repeat all the key data (labels) for each
individual metric.
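
To illustrate what I mean (metric and label names below are entirely made
up), the Prometheus text format forces non-numeric values into labels on a
dummy metric, and repeats the labels on every sample:

  pg_server_info{version="16.1",current_lsn="0/16B3748"} 1
  pg_connections{datname="appdb",state="active"} 12
  pg_connections{datname="appdb",state="idle"} 3

whereas something JSON-ish could carry the strings as values and state the
common keys just once:

  {
    "server": {"version": "16.1", "current_lsn": "0/16B3748"},
    "databases": {"appdb": {"connections": {"active": 12, "idle": 3}}}
  }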

-- 
Dave Page
Blog: https://pgsnake.blogspot.com
Twitter: @pgsnake

EDB: https://www.enterprisedb.com


Re: faulty link

2022-02-10 Thread Dave Page
On Thu, 10 Feb 2022 at 15:53, Dagfinn Ilmari Mannsåker 
wrote:

> Tom Lane  writes:
>
> > =?UTF-8?B?Sm9zZWYgxaBpbcOhbmVr?=  writes:
> >> čt 10. 2. 2022 v 15:35 odesílatel Erik Rijkers  napsal:
> >>> The provided link
> >>> https://www.postgresql.org/docs/release/
> >>> leads to
> >>> https://www.postgresql.org/docs/release/14.2/
> >>> which gives 'Not Found' for me (Netherlands)
> >
> >> Thinking about that again, the 14.2 release just happened. Could it be
> >> just a matter of propagating new release info to mirrors?
> >
> > The link works for me, too (USA).  Stale cache seems like a reasonable
> > explanation for the OP's problem --- maybe clearing browser cache
> > would help?
>
> I'm getting a 404 as well from London. After trying multiple times with
> curl I did get one 200 response, but it's mostly 404s.
>
> It looks like some of the mirrors have it, but not all:
>
> $ for h in $(dig +short -tA www.mirrors.postgresql.org); do echo -n "$h:
> "; curl -i -k -s -HHost:www.postgresql.org "https://$h/docs/release/14.2/;
> | grep ^HTTP; done
>  72.32.157.230: HTTP/2 200
>  87.238.57.232: HTTP/2 404
>  217.196.149.50: HTTP/2 200
>
> $ for h in $(dig +short -t AAAA www.mirrors.postgresql.org); do echo -n
> "$h: "; curl -i -k -s -HHost:www.postgresql.org
> "https://[$h]/docs/release/14.2/"
> | grep ^HTTP; done
>  2001:4800:3e1:1::230: HTTP/2 200
>  2a02:c0:301:0:::32: HTTP/2 404
>  2a02:16a8:dc51::50: HTTP/2 200
>

Despite the name, they're not actually mirrors. They're varnish caches. By
the looks of it one of them cached a 404 (probably someone tried to access
the new page before it really did exist). I've purged /docs/release now,
and everything is returning 200.
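
(For anyone unfamiliar with Varnish, a purge like that is conceptually just a
ban issued on each cache node for the affected path, e.g. something like:

  varnishadm "ban req.url ~ ^/docs/release"

though the exact mechanism naturally depends on how the caches are set up.)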

-- 
Dave Page
Blog: https://pgsnake.blogspot.com
Twitter: @pgsnake

EDB: https://www.enterprisedb.com


Re: sepgsql logging

2022-01-12 Thread Dave Page
On Tue, Jan 11, 2022 at 5:55 PM Tom Lane  wrote:

> Andrew Dunstan  writes:
> > I am not that person either. I agree this looks reasonable, but I also
> > would like the opinion of an expert, if we have one.
>
> I'm not sure we do anymore.  Anyway, I tried this on Fedora 35 and
> confirmed that it compiles and the (very tedious) test process
> described in the sepgsql docs still passes.  Looking in the system's
> logs, it appears that Dave didn't precisely emulate how SELinux
> logs this setting, because I see messages like
>
> Jan  4 12:25:46 nuc1 audit[1754]: AVC avc:  denied  { setgid } for
> pid=1754 comm="sss_cache" capability=6
> scontext=unconfined_u:unconfined_r:useradd_t:s0-s0:c0.c1023
> tcontext=unconfined_u:unconfined_r:useradd_t:s0-s0:c0.c1023
> tclass=capability permissive=0
>
> So it looks like their plan is to unconditionally write "permissive=0"
> or "permissive=1", while Dave's patch just prints nothing in enforcing
> mode.  While I can see some virtue in brevity, I think that doing
> exactly what SELinux does is probably a better choice.  For one thing,
> it'd remove doubt about whether one is looking at a log from a sepgsql
> version that logs this or one that doesn't.
>
> Other than that nitpick, I think we should just push this.
>

Here's an update that adds the "permissive=0" case.

-- 
Dave Page
Blog: https://pgsnake.blogspot.com
Twitter: @pgsnake

EDB: https://www.enterprisedb.com


sepgsql_permissive_logging_v2.diff
Description: Binary data


Re: sepgsql logging

2022-01-11 Thread Dave Page
Hi

On Tue, Jan 11, 2022 at 12:04 AM Jacob Champion 
wrote:

> On Wed, Apr 14, 2021 at 8:42 AM Dave Page  wrote:
> > Attached is a patch to clean this up. It will log denials as such
> > regardless of whether or not either selinux or sepgsql is in
> > permissive mode. When either is in permissive mode, it'll add "
> > permissive=1" to the end of the log messages. e.g.
>
> Dave,
>
> Just to clarify -- it looks like this patch *only* adds the
> "permissive=1" part, right? I don't see any changes around denied-vs-
> allowed.
>

Right. denied-vs-allowed is shown at the beginning of the log line. From my
earlier output:

2021-04-14 13:20:30.401 BST [23073] LOG:  SELinux: allowed { select }
scontext=user_u:user_r:user_t:s0
tcontext=unconfined_u:object_r:sepgsql_table_t:s0 tclass=db_column
name="column salt of table tb_users" permissive=1
2021-04-14 13:20:30.401 BST [23073] LOG:  SELinux: denied { select }
scontext=user_u:user_r:user_t:s0
tcontext=system_u:object_r:sepgsql_secret_table_t:s0 tclass=db_column
name="column phash of table tb_users" permissive=1


>
> I read the previous posts to mean that you were seeing "allowed" when
> you should have been seeing "denied".


That's what I *thought* was happening originally, because I was mistakenly
in permissive mode (if memory serves).


> I don't see that behavior --
> without this patch, I see the correct "denied" entries even when
> running in permissive mode. (It's been a while since the patch was
> posted, so I checked to make sure there hadn't been any relevant
> changes in the meantime, and none jumped out at me.)
>

Right. The point is that if permissive mode is enabled, access will not be
denied. Effectively if you see permissive=1, then "denied" really means
"would be denied if enforcing mode was enabled".

The idea is that you can run a production system in permissive mode to see
what would be denied without breaking things for users. You can use that
info to build your policy, and then when you no longer see any unexpected
denials in the logs, switch to enforcing mode.
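
(To be clear, there are two independent "permissive" switches in play here;
roughly, checking and flipping them looks something like this:

  $ getenforce                  # selinux's own mode
  Permissive
  $ sudo setenforce 1           # switch selinux back to enforcing

  postgres=# SHOW sepgsql.permissive;   -- sepgsql's setting, from postgresql.conf
   sepgsql.permissive
  --------------------
   off
  (1 row)

and sepgsql effectively behaves permissively if either of them is permissive.)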


>
> That said, the patch looks good as-is and seems to be working for me on
> a Rocky 8 VM. (You weren't kidding about the setup difficulty.) Having
> permissive mode show up in the logs seems very useful.
>
> As an aside, I don't see the "allowed" verbiage that sepgsql uses in
> any of the SELinux documentation. I do see third-party references to
> "granted", though, as in e.g.
>
> avc: granted { execute } for ...
>
> That's not something that I think this patch should touch, but it
> seemed tangentially relevant for future convergence work.
>

Interesting. I never spotted that one. I'm not sure it matters much, except
for consistency. It's not like the various tools for analyzing SELinux logs
would be likely to work on a PostgreSQL log.


>
> On Wed, 2021-04-14 at 09:49 -0400, Robert Haas wrote:
> > Looks superficially reasonable on first glance, but I think we should
> > try to get an opinion from someone who knows more about SELinux.
>
> I am not that someone, but this looks straightforward, it's been
> stalled for a while, and I think it should probably go in.
>

I'd like to see that. Thanks for the review.

-- 
Dave Page
Blog: https://pgsnake.blogspot.com
Twitter: @pgsnake

EDB: https://www.enterprisedb.com


Re: VS2022: Support Visual Studio 2022 on Windows

2021-11-24 Thread Dave Page
Hi

On Wed, Nov 24, 2021 at 11:36 AM Michael Paquier 
wrote:

> On Wed, Nov 24, 2021 at 10:00:19AM +0000, Dave Page wrote:
> > It's extremely unlikely that we'd shift to such a new version for PG15.
> We
> > build many components aside from PostgreSQL, and need to use the same
> > toolchain for all of them (we've had very painful experiences with mix n
> > match CRT versions in the past) so it's not just PG that needs to support
> > VS2022 as far as we're concerned
>
> Yes, I can understand that upgrading the base version of VS used is a
> very difficult exercise.  I have been through that, on Windows for
> Postgres..  As well as for the compilation of all its dependencies.
>
> > - Perl, Python, TCL, MIT Kerberos,
> > OpenSSL, libxml2, libxslt etc. are all built with the same toolchain for
> > consistency.
>
> Dave, do you include LZ4 in 14?  Just asking, as a matter of
> curiosity.
>

Yes we do :-)

C:\Program Files\PostgreSQL\14\bin>pg_config
BINDIR = C:/PROGRA~1/POSTGR~1/14/bin
DOCDIR = C:/PROGRA~1/POSTGR~1/14/doc
HTMLDIR = C:/PROGRA~1/POSTGR~1/14/doc
INCLUDEDIR = C:/PROGRA~1/POSTGR~1/14/include
PKGINCLUDEDIR = C:/PROGRA~1/POSTGR~1/14/include
INCLUDEDIR-SERVER = C:/PROGRA~1/POSTGR~1/14/include/server
LIBDIR = C:/Program Files/PostgreSQL/14/lib
PKGLIBDIR = C:/Program Files/PostgreSQL/14/lib
LOCALEDIR = C:/PROGRA~1/POSTGR~1/14/share/locale
MANDIR = C:/Program Files/PostgreSQL/14/man
SHAREDIR = C:/PROGRA~1/POSTGR~1/14/share
SYSCONFDIR = C:/Program Files/PostgreSQL/14/etc
PGXS = C:/Program Files/PostgreSQL/14/lib/pgxs/src/makefiles/pgxs.mk
CONFIGURE = --enable-thread-safety --enable-nls --with-ldap
--with-ssl=openssl --with-uuid --with-libxml --with-libxslt --with-lz4
--with-icu --with-tcl --with-perl --with-python
CC = not recorded
CPPFLAGS = not recorded
CFLAGS = not recorded
CFLAGS_SL = not recorded
LDFLAGS = not recorded
LDFLAGS_EX = not recorded
LDFLAGS_SL = not recorded
LIBS = not recorded
VERSION = PostgreSQL 14.1

-- 
Dave Page
Blog: https://pgsnake.blogspot.com
Twitter: @pgsnake

EDB: https://www.enterprisedb.com


Re: VS2022: Support Visual Studio 2022 on Windows

2021-11-24 Thread Dave Page
Hi

On Wed, Nov 24, 2021 at 9:12 AM Hans Buschmann  wrote:

> Hello Michael,
>
> thanks for your hard work and quick response!
> It is very convenient to only use VS2022 for Windows from now on...
>
> >Diff unrelated to your patch.
>
> Sorry for the copysoft problem from the first version.
>
> >Glad to see that we should have nothing to do about locales this
> >time.  I have not tested, but I think that you covering all the areas
> >that need a refresh here.  Nice work.
>
> I think it is almost impossible to overestimate the value of such support
> from experienced hackers to others starting their journey right now...
>
> I hope I can motivate you (and other experienced hackers) to give me some
> more support on my real project arriving anytime soon. It addresses
> hex_encoding (and more) targetting mostly pg_dump, but requires also some
> deeper knowledge of general infrastructure and building (also on Windows).
> Stay tuned!
>
> PS: Does anybody have good relations to EDB suggesting them to target
> VS2022 as the build environment for the upcoming PG15 release?
>

That would be me...


>
> postgres=# select version ();
>                            version
> -------------------------------------------------------------
>  PostgreSQL 14.1, compiled by Visual C++ build 1931, 64-bit
> (1 row)
>

It's extremely unlikely that we'd shift to such a new version for PG15. We
build many components aside from PostgreSQL, and need to use the same
toolchain for all of them (we've had very painful experiences with mix n
match CRT versions in the past) so it's not just PG that needs to support
VS2022 as far as we're concerned - Perl, Python, TCL, MIT Kerberos,
OpenSSL, libxml2, libxslt etc. are all built with the same toolchain for
consistency.

-- 
Dave Page
Blog: https://pgsnake.blogspot.com
Twitter: @pgsnake

EDB: https://www.enterprisedb.com


Re: [PATCH] Proposal for HIDDEN/INVISIBLE column

2021-10-14 Thread Dave Page
On Thu, Oct 14, 2021 at 2:32 PM Gilles Darold  wrote:

> Le 14/10/2021 à 14:28, Pavel Stehule a écrit :
>
>
>
> čt 14. 10. 2021 v 14:13 odesílatel Vik Fearing 
> napsal:
>
>> On 10/14/21 1:47 PM, Aleksander Alekseev wrote:
>> > Hi Gilles,
>> >
>> >> Any though and interest in this feature?
>> >
>> > Personally, I wouldn't call this feature particularly useful. `SELECT
>> > *` is intended for people who are working with DBMS directly e.g. via
>> > psql and want to see ALL columns.
>>
>> I disagree strongly with this.  It is really annoying when working
>> interactively with psql on a table that has a PostGIS geometry column,
>> or any other large blobby type column.
>>
>> I have not looked at the patch, but +1 for the feature.
>>
>
> Wouldn't it be better to redefine some output strategies for some types?
>
> I can agree that sometimes, in some environments, the proposed feature can
> be nice, but it can be a strong footgun too.
>
> Maybe some strange data could be filtered in psql instead, and that might
> be a better solution. I agree that printing long geometry values in psql is
> usually useless.
>
>
> Pavel, this doesn't only concern output but input too - think about INSERT
> or COPY without a column list. We could add such a filter in psql, but what
> about other clients? They would all have to implement their own filtering
> method. I think the HIDDEN attribute provides a common and basic way to
> implement that in all client applications.
>

I like the idea - being able to hide computed columns such as tsvectors from
CRUD queries by default, for example, seems like it would be very nice.
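
Something like this, say (syntax entirely hypothetical - I haven't checked
what the patch actually implements):

  CREATE TABLE documents (
      id       int PRIMARY KEY,
      body     text,
      body_tsv tsvector GENERATED ALWAYS AS (to_tsvector('english', body)) STORED HIDDEN
  );

  SELECT * FROM documents;             -- id and body only
  SELECT id, body_tsv FROM documents;  -- still available when named explicitly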

-- 
Dave Page
Blog: https://pgsnake.blogspot.com
Twitter: @pgsnake

EDB: https://www.enterprisedb.com


Re: "an SQL" vs. "a SQL"

2021-06-10 Thread Dave Page
On Thu, Jun 10, 2021 at 9:31 AM Peter Eisentraut <
peter.eisentr...@enterprisedb.com> wrote:

> On 10.06.21 09:26, David Rowley wrote:
> > It seems we have no standard as to if we say "a SQL" or "an SQL".
>
> The SQL standard uses "an SQL-something".
>

I use both commonly, but the argument for "an S-Q-L ..." is strong I think
- and I definitely think consistency is good.


>
> > However, we mostly use "an SQL"  in the docs.
> >
> > ~/pg_src$ cd doc/
> > ~/pg_src/doc$ git grep -E "\s(a|A)\sSQL\s" | wc -l
> > 55
> > ~/pg_src/doc$ git grep -E "\s(an|An)\sSQL\s" | wc -l
> > 94
> >
> > I think we should change all 55 instances of "a SQL" in the docs to
> > use "an SQL" and leave the 800 other instances of "a SQL" alone.
> > Changing those does not seem worthwhile as it could cause
> > back-patching pain.
>
> agreed
>

+1 in general, though I would perhaps suggest extending to any user-visible
messages in the code. I don't think there's any point in messing with
comments etc. I'm not sure what that would do to the numbers though.
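
As a very rough way to gauge that (just a sketch, with errmsg/errdetail/errhint
being the usual markers for user-visible strings):

  $ git grep -E '\ba SQL\b' -- src | grep -E 'errmsg|errdetail|errhint' | wc -l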


>
> > Further, there might be a few more in the docs that we might want to
> > consider changing:
> >
> > git grep -E "\sa\s(A|E|F|H|I|L|M|N|O|S|X)[A-Z]{2,5}\s"
> >
> > I see "a FSM", "a FIFO", "a SSPI", "a SASL", "a MCV", "a SHA", "a SQLDA"
> >
> > My regex foo is not strong enough to think how I might find multiline
> instances.
>
> Um, of those, I pronounce FIFO, SASL, and SHA as words, with an "a"
> article.
>

Same here. I've never heard anyone try to pronounce SSPI, so I would expect
that to be "an SSPI ...". The other remaining ones (FSM, MCV & SQLDA) I
would also argue aren't pronounceable, so should use the "an" article.

-- 
Dave Page
Blog: https://pgsnake.blogspot.com
Twitter: @pgsnake

EDB: https://www.enterprisedb.com


Re: [PATCH v3 1/1] Fix detection of preadv/pwritev support for OSX.

2021-05-20 Thread Dave Page
On Tue, Mar 30, 2021 at 6:58 AM Tom Lane  wrote:

> Thomas Munro  writes:
> > I'll move it when committing.  I'll let this patch sit for another day
> > to see if any other objections show up.
>
> FWIW, I remain fairly strongly against this, precisely because of the
> point that it requires us to start using a randomly different
> feature-probing technology anytime Apple decides that they're going to
> implement some standard API that they didn't before.  Even if it works
> everywhere for preadv/pwritev (which we won't know in advance of
> buildfarm testing, and maybe not then, since detection failures will
> probably be silent), it seems likely that we'll hit some case in the
> future where this interacts badly with some other platform's weirdness.
> We haven't claimed in the past to support MACOSX_DEPLOYMENT_TARGET,
> and I'm not sure we should start now.  How many people actually care
> about that?
>

I missed this earlier - it's come to my attention through a thread on the
-packagers list. Adding my response on that thread here for this audience:

The ability to target older releases with a newer SDK is essential for
packages such as the EDB PostgreSQL installers and the pgAdmin community
installers. It's very difficult (sometimes impossible) to get older OS
versions on new machines now - Apple make it very hard to download old
versions of macOS (some can be found, others not), and they won't always
work on newer hardware anyway so it's really not feasible to have all the
build machines running the oldest version that needs to be supported.

FYI, the pgAdmin and PG installer buildfarms have
-mmacosx-version-min=10.12 in CFLAGS etc. to handle this, which is
synonymous with MACOSX_DEPLOYMENT_TARGET. We've been successfully building
packages that way for a decade or more.
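
i.e. roughly one of the following at configure time (the target version here
is just an example):

  $ MACOSX_DEPLOYMENT_TARGET=10.12 ./configure ...

  $ ./configure CFLAGS="-mmacosx-version-min=10.12" LDFLAGS="-mmacosx-version-min=10.12" ...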

-- 
Dave Page
Blog: https://pgsnake.blogspot.com
Twitter: @pgsnake

EDB: https://www.enterprisedb.com


Re: PATCH: Add GSSAPI ccache_name option to libpq

2021-04-22 Thread Dave Page
On Thu, Apr 22, 2021 at 1:55 AM Stephen Frost  wrote:

> Greetings,
>
> * Daniel Carter (danielchriscarter+postg...@gmail.com) wrote:
> > On 21/04/2021 18:40, Stephen Frost wrote:
> > >I surely hope that the intent here is to use Negotiate / SPNEGO to
> > >authenticate the user who is connecting to the webserver and then have
> > >credentials delegated (ideally through constrained credential
> > >delegation..) to the web server by the user for the web application to
> > >use to connect to the PG server.
> > >
> > >I certainly don't think we should be targetting a solution where the
> > >application is acquiring credentials from the KDC directly using a
> > >user's username/password, that's very strongly discouraged for the very
> > >good reason that it means the user's password is being passed around.
> >
> > Indeed -- that's certainly not the intended aim of this patch!
>
> Glad to hear that. :)
>
> > >>There may well be a better way of going about this -- it's just that I
> can't
> > >>currently see an obvious way to get this kind of setup working using
> only
> > >>the environment variable.
> > >
> > >Perhaps you could provide a bit more information about what you're
> > >specifically doing here?  Again, with something like apache's
> > >mod_auth_gssapi, it's a matter of just installing that module and then
> > >the user will be authenticated by the web server itself, including
> > >managing of delegated credentials, setting of the environment variables,
> > >and the web application shouldn't have to do anything but use libpq to
> > >request a connection and if PG's configured with gssapi auth, it'll all
> > >'just work'.  Only thing I can think of offhand is that you might have
> > >to take AUTH_USER and pass that to libpq as the user's username to
> > >connect with and maybe get from the user what database to request the
> > >connection to..
> >
> > Hmm, yes -- something like that is definitely a neater way of doing
> things
> > in the web app scenario (I'd been working on the principle that the
> username
> > and credential cache were "provided" from the same place, i.e. the web
> app,
> > but as you point out that's not actually necessary).
>
> Yeah, that's really how web apps should be doing this.
>
> > However, it seems like there might be some interest in this for other
> > scenarios (e.g. with relation to multi-threaded applications where more
> > precise control of which thread uses which credential cache is useful),
> so
> > possibly this may still be worth continuing with even if it has a
> slightly
> > different intended purpose to what was originally planned?
>
> I'd want to hear the actual use-case rather than just hand-waving that
> "oh, this might be useful for this threaded app that might exist some
> day"...
>

I thought I gave that precise use case upthread. As you know, we've been
adding Kerberos support to pgAdmin. When running in server mode, we have
multiple users logging into a single instance of the application, and we
need to cache credentials for them to be used to login to the PostgreSQL
servers, using libpq that is on the pgAdmin server. For obvious reasons, we
want to use separate credential caches for each pgAdmin user, and currently
that means having a mutex around every use of the caches, so we can be sure
we're safely manipulating the environment, using the correct cache, and
then continuing as normal once we're done.
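
To make that concrete, a rough sketch (hostnames and paths made up, and
ccache_name being the option proposed in this patch):

  /* current approach: process-wide, so every connection attempt needs a mutex */
  setenv("KRB5CCNAME", "FILE:/var/lib/pgadmin/krbcc/alice", 1);
  conn = PQconnectdb("host=db.example.com dbname=postgres user=alice krbsrvname=postgres");

  /* with ccache_name: each connection carries its own cache, no shared state touched */
  conn = PQconnectdb("host=db.example.com dbname=postgres user=alice krbsrvname=postgres "
                     "ccache_name=FILE:/var/lib/pgadmin/krbcc/alice");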

-- 
Dave Page
Blog: https://pgsnake.blogspot.com
Twitter: @pgsnake

EDB: https://www.enterprisedb.com


Re: PATCH: Add GSSAPI ccache_name option to libpq

2021-04-21 Thread Dave Page
Hi

On Tue, Apr 20, 2021 at 8:44 PM Daniel Carter <
danielchriscarter+postg...@gmail.com> wrote:

> Hi Stephen,
>
> On 20/04/2021 20:01, Stephen Frost wrote:
> > I'm not necessarily against this, but typically the GSSAPI library
> > provides a way for you to control this using, eg, the KRB5_CCACHE
> > environment variable.  Is there some reason why that couldn't be used..?
>
> The original motivation for investigating this was setting up a web app
> which could authenticate to a database server using a Kerberos ticket.
> Since the web framework already needs to create a connection string
> (with database name etc.) to set up the database connection, having an
> option here for the ccache location makes it much more straightforward
> to specify than having to save data out to environment variables (and
> makes things cleaner if there are potentially multiple database
> connections going on at once in different processes).
>

Yes, that's why we'd like it for pgAdmin. When dealing with a
multi-threaded application it becomes a pain keeping credentials for
different users separated; a lot more mucking about with mutexes etc. If we
could specify the credential cache location in the connection string, it
would be much easier (and likely more performant) to securely keep
individual caches for each user.


>
> There may well be a better way of going about this -- it's just that I
> can't currently see an obvious way to get this kind of setup working
> using only the environment variable.
>
> Many thanks,
> Daniel
>
>
>

-- 
Dave Page
Blog: https://pgsnake.blogspot.com
Twitter: @pgsnake

EDB: https://www.enterprisedb.com


Re: PATCH: Add GSSAPI ccache_name option to libpq

2021-04-20 Thread Dave Page
Hi

On Tue, Apr 20, 2021 at 10:37 AM Daniel Carter <
danielchriscarter+postg...@gmail.com> wrote:

> Hi,
>
> This is a small patch (against master) to allow an application using
> libpq with GSSAPI authentication to specify where to fetch the
> credential cache from -- it effectively consists of a new field in
> PQconninfoOptions to store this data and (where the user has specified a
> ccache location) a call into the gss_krb5_ccache_name function in the
> GSSAPI library.
>

The pgAdmin team would love to have this feature. It would greatly simplify
management of multiple connections from different users.


>
> It's my first go at submitting a patch -- it works as far as I can tell,
> but I suspect there will probably still be stuff to fix before it's
> ready to use!
>
> As far as I'm concerned this is working (the code compiles successfully
> following "./configure --with-gssapi --enable-cassert", and seems to
> work for specifying the ccache location without any noticeable errors).
>
> I hope there shouldn't be anything platform-specific here (I've been
> working on Ubuntu Linux but the only interactions with external
> applications are via the GSSAPI library, which was already in use).
>
> The dispsize value for ccache_name is 64 in this code (which seems to be
> what's used with other file-path-like parameters in the existing code)
> but I'm happy to have this corrected if it needs a different value -- as
> far as I can tell this is just for display purposes rather than anything
> critical in terms of actually storing the value?
>
> If no ccache_name is specified in the connection string then it defaults
> to NULL, which means the gss_krb5_ccache_name call is not made and the
> current behaviour (of letting the GSSAPI library work out the location
> of the ccache) is not changed.
>
> Many thanks,
> Daniel
>
>

-- 
Dave Page
Blog: https://pgsnake.blogspot.com
Twitter: @pgsnake

EDB: https://www.enterprisedb.com


Re: Windows default locale vs initdb

2021-04-19 Thread Dave Page
On Mon, Apr 19, 2021 at 11:52 AM Andrew Dunstan  wrote:

>
> My understanding from Microsoft staff at conferences is that Azure's
> PostgreSQL SAS runs on  linux, not WIndows.
>

This is from a regular Azure Database for PostgreSQL single server:

postgres=> select version();
                          version
-------------------------------------------------------------
 PostgreSQL 11.6, compiled by Visual C++ build 1800, 64-bit
(1 row)

And this is from the new Flexible Server preview:

postgres=> select version();
                                                     version
------------------------------------------------------------------------------------------------------------------
 PostgreSQL 12.6 on x86_64-pc-linux-gnu, compiled by gcc (Ubuntu 5.4.0-6ubuntu1~16.04.12) 5.4.0 20160609, 64-bit
(1 row)

So I guess it's a case of "it depends".

-- 
Dave Page
Blog: https://pgsnake.blogspot.com
Twitter: @pgsnake

EDB: https://www.enterprisedb.com


Re: sepgsql logging

2021-04-14 Thread Dave Page
Hi

On Thu, Apr 1, 2021 at 3:30 PM Dave Page  wrote:

>
>
> On Thu, Apr 1, 2021 at 3:23 PM Tom Lane  wrote:
>
>> Andrew Dunstan  writes:
>> > On 4/1/21 8:32 AM, Dave Page wrote:
>> >> It seems to me that sepgsql should also log the denial, but flag that
>> >> permissive mode is on.
>>
>> > +1 for doing what selinux does if possible.
>>
>> +1.  If selinux itself is doing that, it's hard to see a reason why
>> we should not; and I concur that the info is useful.
>>
>
> Thanks both. I'll take a look at the code and see if I can whip up a patch
> (it'll be a week or so as I'm taking some time off for Easter).
>

Attached is a patch to clean this up. It will log denials as such
regardless of whether or not either selinux or sepgsql is in permissive
mode. When either is in permissive mode, it'll add " permissive=1" to the
end of the log messages. e.g.

Regular user in permissive mode, with a restricted table column:

2021-04-14 13:20:30.401 BST [23073] LOG:  SELinux: allowed { select }
scontext=user_u:user_r:user_t:s0
tcontext=unconfined_u:object_r:sepgsql_table_t:s0 tclass=db_table
name="public.tb_users" permissive=1
2021-04-14 13:20:30.401 BST [23073] STATEMENT:  SELECT * FROM tb_users;
2021-04-14 13:20:30.401 BST [23073] LOG:  SELinux: allowed { select }
scontext=user_u:user_r:user_t:s0
tcontext=unconfined_u:object_r:sepgsql_table_t:s0 tclass=db_column
name="column uid of table tb_users" permissive=1
2021-04-14 13:20:30.401 BST [23073] STATEMENT:  SELECT * FROM tb_users;
2021-04-14 13:20:30.401 BST [23073] LOG:  SELinux: allowed { select }
scontext=user_u:user_r:user_t:s0
tcontext=unconfined_u:object_r:sepgsql_table_t:s0 tclass=db_column
name="column name of table tb_users" permissive=1
2021-04-14 13:20:30.401 BST [23073] STATEMENT:  SELECT * FROM tb_users;
2021-04-14 13:20:30.401 BST [23073] LOG:  SELinux: allowed { select }
scontext=user_u:user_r:user_t:s0
tcontext=unconfined_u:object_r:sepgsql_table_t:s0 tclass=db_column
name="column mail of table tb_users" permissive=1
2021-04-14 13:20:30.401 BST [23073] STATEMENT:  SELECT * FROM tb_users;
2021-04-14 13:20:30.401 BST [23073] LOG:  SELinux: allowed { select }
scontext=user_u:user_r:user_t:s0
tcontext=unconfined_u:object_r:sepgsql_table_t:s0 tclass=db_column
name="column address of table tb_users" permissive=1
2021-04-14 13:20:30.401 BST [23073] STATEMENT:  SELECT * FROM tb_users;
2021-04-14 13:20:30.401 BST [23073] LOG:  SELinux: allowed { select }
scontext=user_u:user_r:user_t:s0
tcontext=unconfined_u:object_r:sepgsql_table_t:s0 tclass=db_column
name="column salt of table tb_users" permissive=1
2021-04-14 13:20:30.401 BST [23073] STATEMENT:  SELECT * FROM tb_users;
2021-04-14 13:20:30.401 BST [23073] LOG:  SELinux: denied { select }
scontext=user_u:user_r:user_t:s0
tcontext=system_u:object_r:sepgsql_secret_table_t:s0 tclass=db_column
name="column phash of table tb_users" permissive=1
2021-04-14 13:20:30.401 BST [23073] STATEMENT:  SELECT * FROM tb_users;

The same user/table, but in enforcing mode:

2021-04-14 13:17:21.645 BST [22974] LOG:  SELinux: allowed { search }
scontext=user_u:user_r:user_t:s0
tcontext=system_u:object_r:sepgsql_schema_t:s0 tclass=db_schema
name="public" at character 15
2021-04-14 13:17:21.645 BST [22974] STATEMENT:  SELECT * FROM tb_users;
2021-04-14 13:17:21.646 BST [22974] LOG:  SELinux: allowed { select }
scontext=user_u:user_r:user_t:s0
tcontext=unconfined_u:object_r:sepgsql_table_t:s0 tclass=db_table
name="public.tb_users"
2021-04-14 13:17:21.646 BST [22974] STATEMENT:  SELECT * FROM tb_users;
2021-04-14 13:17:21.646 BST [22974] LOG:  SELinux: allowed { select }
scontext=user_u:user_r:user_t:s0
tcontext=unconfined_u:object_r:sepgsql_table_t:s0 tclass=db_column
name="column uid of table tb_users"
2021-04-14 13:17:21.646 BST [22974] STATEMENT:  SELECT * FROM tb_users;
2021-04-14 13:17:21.646 BST [22974] LOG:  SELinux: allowed { select }
scontext=user_u:user_r:user_t:s0
tcontext=unconfined_u:object_r:sepgsql_table_t:s0 tclass=db_column
name="column name of table tb_users"
2021-04-14 13:17:21.646 BST [22974] STATEMENT:  SELECT * FROM tb_users;
2021-04-14 13:17:21.646 BST [22974] LOG:  SELinux: allowed { select }
scontext=user_u:user_r:user_t:s0
tcontext=unconfined_u:object_r:sepgsql_table_t:s0 tclass=db_column
name="column mail of table tb_users"
2021-04-14 13:17:21.646 BST [22974] STATEMENT:  SELECT * FROM tb_users;
2021-04-14 13:17:21.646 BST [22974] LOG:  SELinux: allowed { select }
scontext=user_u:user_r:user_t:s0
tcontext=unconfined_u:object_r:sepgsql_table_t:s0 tclass=db_column
name="column address of table tb_users"
2021-04-14 13:17:21.646 BST [22974] STATEMENT:  SELECT * FROM tb_users;
2021-04-14 13:17:21.646 BST [22974] LOG:  SELinux: allowed { select }
scontext=user_u:user_r:u

Re: More sepgsql weirdness

2021-04-14 Thread Dave Page
Hi

On Tue, Apr 13, 2021 at 6:22 PM Robert Haas  wrote:

> On Tue, Apr 13, 2021 at 10:33 AM Dave Page  wrote:
> > On a system with selinux and sepgsql configured, search path resolution
> appears to fail if sepgsql is in enforcing mode, but selinux is in
> permissive mode (which, as I understand it, should cause sepgsql to behave
> as if it's in permissive mode anyway - and does for other operations).
> Regardless of whether my understanding of the interaction of the two
> permissive modes is correct, I don't believe the following should happen:
>
> I agree that this sounds like something which shouldn't happen if the
> system is in permissive mode,


I realised that my test database hadn't had the sepgsql SQL script run in
it (I must have created it before running it on template1). I guess the
error was caused by lack of proper labelling.

So, clearly my fault, but I think there are a couple of things we need to
do here:

1) Improve the docs for sepgsql. The *only* vaguely useful source of info
I've found on using this is "SELinux System Administration", a Packt book
by Sven Vermeulen. Our own docs don't even list the supported object
classes (e.g. db_table) or types (e.g. sepgsql_ro_table_t) for example.

2) Improve the way we handle cases like the one I ran into. I only realised
what was going on when I tried to run sepgsql_getcon() to confirm I was
running in unconfined_t. Clearly very weird things can happen if labelling
hasn't been run; perhaps we could raise a notice if the sepgsql module is
loaded but sepgsql_getcon() isn't present (though that seems flakey at
best)? I'd hesitate to try to check for the presence of one or more labels
as the admin could have intentionally removed them or changed them of
course.
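
(For anyone who hits the same thing, the quick manual check that gave the
game away for me was simply running:

  mydb=# SELECT sepgsql_getcon();

If sepgsql is in shared_preload_libraries but that fails with "function does
not exist", the sepgsql.sql labelling script was never run in that database.)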

-- 
Dave Page
Blog: http://pgsnake.blogspot.com
Twitter: @pgsnake

EDB: http://www.enterprisedb.com


More sepgsql weirdness

2021-04-13 Thread Dave Page
On a system with selinux and sepgsql configured, search path resolution
appears to fail if sepgsql is in enforcing mode, but selinux is in
permissive mode (which, as I understand it, should cause sepgsql to behave
as if it's in permissive mode anyway - and does for other operations).
Regardless of whether my understanding of the interaction of the two
permissive modes is correct, I don't believe the following should happen:

mls=# SELECT current_user;
 current_user
--------------
 postgres
(1 row)

mls=# SHOW search_path;
   search_path
-----------------
 "$user", public
(1 row)

mls=# \dn+ public
                          List of schemas
  Name  |  Owner   |  Access privileges   |      Description
--------+----------+----------------------+------------------------
 public | postgres | postgres=UC/postgres+| standard public schema
        |          | =UC/postgres         |
(1 row)

mls=# CREATE TABLE tb_users(uid int primary key, name text, mail text,
address text, salt text, phash text);
ERROR:  no schema has been selected to create in
LINE 1: CREATE TABLE tb_users(uid int primary key, name text, mail t...
                     ^
mls=# CREATE TABLE public.tb_users(uid int primary key, name text, mail
text, address text, salt text, phash text);
CREATE TABLE
mls=# drop table tb_users;
ERROR:  table "tb_users" does not exist
mls=# drop table public.tb_users;
DROP TABLE

This is on head, pulled yesterday.

-- 
Dave Page
Blog: http://pgsnake.blogspot.com
Twitter: @pgsnake

EDB: http://www.enterprisedb.com


Re: sepgsql logging

2021-04-01 Thread Dave Page
On Thu, Apr 1, 2021 at 3:23 PM Tom Lane  wrote:

> Andrew Dunstan  writes:
> > On 4/1/21 8:32 AM, Dave Page wrote:
> >> It seems to me that sepgsql should also log the denial, but flag that
> >> permissive mode is on.
>
> > +1 for doing what selinux does if possible.
>
> +1.  If selinux itself is doing that, it's hard to see a reason why
> we should not; and I concur that the info is useful.
>

Thanks both. I'll take a look at the code and see if I can whip up a patch
(it'll be a week or so as I'm taking some time off for Easter).

-- 
Dave Page
Blog: http://pgsnake.blogspot.com
Twitter: @pgsnake

EDB: http://www.enterprisedb.com


sepgsql logging

2021-04-01 Thread Dave Page
Hi

I've been trying to figure out selinux with sepgsql (which is proving quite
difficult as there is an almost total lack of documentation/blogs etc. on
the topic) and ran into an issue. Whilst my system had selinux in enforcing
mode, I mistakenly had sepgsql in permissive mode. I created a table and
restricted access to one column to regular users using the label
system_u:object_r:sepgsql_secret_table_t:s0. Because sepgsql was in
permissive mode, my test user could still access the restricted column.

Postgres logged this:

2021-03-31 17:12:29.713 BST [3917] LOG:  SELinux: allowed { select }
scontext=user_u:user_r:user_t:s0
tcontext=system_u:object_r:sepgsql_secret_table_t:s0 tclass=db_column
name="column private of table t1"

That's very confusing, because the norm in selinux is to log denials as if
the system were in enforcing mode, but then allow the action to proceed
anyway, when in permissive mode. For example, log entries such as this are
created when my restricted user tries to run an executable from /tmp after
running "setsebool -P user_exec_content off":

type=AVC msg=audit(1617278924.917:484): avc:  denied  { execute } for
 pid=53036 comm="bash" name="ls" dev="dm-0" ino=319727
scontext=user_u:user_r:user_t:s0 tcontext=user_u:object_r:user_tmp_t:s0
tclass=file permissive=1

The point being to let the admin know what would fail if the system were
switched to enforcing mode. Whilst that wasn't the point of what I was
trying to do, such a message would have indicated to me that I was in
permissive mode without realising.

It seems to me that sepgsql should also log the denial, but flag that
permissive mode is on.

Any reason not to do that?

-- 
Dave Page
Blog: http://pgsnake.blogspot.com
Twitter: @pgsnake

EDB: http://www.enterprisedb.com


Re: Heads-up: macOS Big Sur upgrade breaks EDB PostgreSQL installations

2020-11-17 Thread Dave Page
FYI, both Jonathan and I have now tested this on additional machines and
have been unable to reproduce the issue, so it seems like something odd
happened on my original upgrade rather than a general issue.

Apologies for the noise.

On Mon, Nov 16, 2020 at 9:27 AM Dave Page  wrote:

> Hi,
>
> This is more of a head-ups than anything else, as I suspect this may come
> up in various forums.
>
> The PostgreSQL installers for macOS (from EDB, possibly others too) create
> the data directory in /Library/PostgreSQL/<version>/data. This has been
> the case since the first release, 10+ years ago.
>
> It looks like the Big Sur upgrade has taken it upon itself to "fix" any
> filesystem permissions it doesn't like. On my system, this resulted in the
> data directory having 0755 permissions, which meant that PostgreSQL refused
> to start. Manually changing the permissions back to 0700 (0750 should also
> work) fixes the issue.
>
> I'm not sure there's much we can do about this - systems that are likely
> to be affected are already out there, and we obviously don't want to relax
> the permissions Postgres requires.
>
> --
> Dave Page
> Blog: http://pgsnake.blogspot.com
> Twitter: @pgsnake
>
> EDB: http://www.enterprisedb.com
>
>

-- 
Dave Page
Blog: http://pgsnake.blogspot.com
Twitter: @pgsnake

EDB: http://www.enterprisedb.com


Re: Heads-up: macOS Big Sur upgrade breaks EDB PostgreSQL installations

2020-11-16 Thread Dave Page
On Mon, Nov 16, 2020 at 4:45 PM Pavel Borisov 
wrote:

> I suppose there are many ways to have PG on OSX, i.e. package managers
> (Homebrew, MacPorts), app installers etc., and so many places one can find
> their data directory residing in. Generally I prefer the data directory to
> be somewhere inside the user home dir, as OSX will take care of possible
> backups and will not generally modify its contents during migration between
> OSX versions and/or different machines. It is not only a question of
> permissions.
>
> Any options inside user homedir are equally suitable IMO.
>

It is in the user's homedir - it's just that that isn't under /Users:

hal:~ postgres$ echo $HOME
/Library/PostgreSQL/13

With the EDB installers (unlike postgres.app), PostgreSQL runs as a
service, much as it would on Linux or BSD.

-- 
Dave Page
Blog: http://pgsnake.blogspot.com
Twitter: @pgsnake

EDB: http://www.enterprisedb.com


Re: Heads-up: macOS Big Sur upgrade breaks EDB PostgreSQL installations

2020-11-16 Thread Dave Page
On Mon, Nov 16, 2020 at 3:55 PM Jonathan S. Katz 
wrote:

> On 11/16/20 4:27 AM, Dave Page wrote:
> > Hi,
> >
> > This is more of a head-ups than anything else, as I suspect this may
> > come up in various forums.
> >
> > The PostgreSQL installers for macOS (from EDB, possibly others too)
> > create the data directory in /Library/PostgreSQL/<version>/data. This
> > has been the case since the first release, 10+ years ago.
> >
> > It looks like the Big Sur upgrade has taken it upon itself to "fix" any
> > filesystem permissions it doesn't like. On my system, this resulted in
> > the data directory having 0755 permissions, which meant that PostgreSQL
> > refused to start. Manually changing the permissions back to 0700 (0750
> > should also work) fixes the issue.
> >
> > I'm not sure there's much we can do about this - systems that are likely
> > to be affected are already out there, and we obviously don't want to
> > relax the permissions Postgres requires.
>
> Thanks for raising this. We should provide some guidance on upgrading
> this when upgrading to Big Sur.
>
> Do we know where the other macOS installers place their data
> directories? We should reach out to the installer maintainers to see if
> they are seeing the same behavior so we know what guidance to issue.
>

I believe postgres.app only installs for the current user, and puts its
data under ~/Library/Application Support/Postgres.

-- 
Dave Page
Blog: http://pgsnake.blogspot.com
Twitter: @pgsnake

EDB: http://www.enterprisedb.com


Heads-up: macOS Big Sur upgrade breaks EDB PostgreSQL installations

2020-11-16 Thread Dave Page
Hi,

This is more of a head-ups than anything else, as I suspect this may come
up in various forums.

The PostgreSQL installers for macOS (from EDB, possibly others too) create
the data directory in /Library/PostgreSQL/<version>/data. This has been
the case since the first release, 10+ years ago.

It looks like the Big Sur upgrade has taken it upon itself to "fix" any
filesystem permissions it doesn't like. On my system, this resulted in the
data directory having 0755 permissions, which meant that PostgreSQL refused
to start. Manually changing the permissions back to 0700 (0750 should also
work) fixes the issue.
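
i.e. something along the lines of the following, followed by restarting
PostgreSQL (adjust the version number to match the installation):

  sudo chmod 700 /Library/PostgreSQL/<version>/data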

I'm not sure there's much we can do about this - systems that are likely to
be affected are already out there, and we obviously don't want to relax the
permissions Postgres requires.

-- 
Dave Page
Blog: http://pgsnake.blogspot.com
Twitter: @pgsnake

EDB: http://www.enterprisedb.com


Re: git clone failed in windows

2020-10-23 Thread Dave Page
On Fri, Oct 23, 2020 at 1:39 PM Amit Kapila  wrote:

> On Fri, Oct 23, 2020 at 4:39 PM Sridhar N Bamandlapally
>  wrote:
> >
> > Am trying to clone postgresql git, getting error
> >
> > D:\sridhar>git clone https://git.postgresql.org/git/postgresql.git
> > Cloning into 'postgresql'...
> > remote: Enumerating objects: 806507, done.
> > remote: Counting objects: 100% (806507/806507), done.
> > remote: Compressing objects: 100% (122861/122861), done.
> > error: RPC failed; curl 18 transfer closed with 3265264 bytes remaining
> to read
> > fatal: the remote end hung up unexpectedly
> > fatal: early EOF
> > fatal: index-pack failed
> >
>
> I have also just tried this and it failed with same error. However, it
> worked when I tried 'git clone
> git://git.postgresql.org/git/postgresql.git'. I don't know what is the
> issue.
>

It worked for me with https. Can you try again? It may be that the Varnish
cache was doing its meditation thing for some reason. I can't see anything
obvious on the system though - nothing in the logs, and the services have
all been up for days.

-- 
Dave Page
Blog: http://pgsnake.blogspot.com
Twitter: @pgsnake

EDB: http://www.enterprisedb.com


Re: Kerberos support broken on MSVC builds for Windows x64?

2020-09-03 Thread Dave Page
On Thu, Sep 3, 2020 at 4:15 PM Dave Page  wrote:

>
> So having rebuilt PostgreSQL against that, I'm now in the situation where
> the server never even attempts to get a ticket as far as I can see, and
> psql just crashes with nothing more than a useless error in the event log:
>
> Faulting application name: psql.exe, version: 14.0.0.20246, time stamp:
> 0x5f50e477
> Faulting module name: unknown, version: 0.0.0.0, time stamp: 0x
> Exception code: 0xc005
> Fault offset: 0x
> Faulting process id: 0xd10
> Faulting application start time: 0x01d681f189a17360
> Faulting application path: C:\pg\bin\psql.exe
> Faulting module path: unknown
> Report Id: eb68d787-1c82-420d-8878-bc0648932a5d
> Faulting package full name:
> Faulting package-relative application ID:
>
> So I'm going to have to break out the debugger, though I suspect this may
> require more effort than I have time for right now.
>

Yeah, this is almost certainly well beyond what I have the time to figure
out. Happy to do any testing etc. that may be needed, but I think this
needs someone familiar with the GSS API to take the lead.

Here's what I got from psql in the debugger:

Exception thrown at 0x in psql.exe: 0xC005: Access
violation executing location 0x. occurred

()
krb5_64.dll!51942807()
krb5_64.dll!5194214b()
krb5_64.dll!51980611()
krb5_64.dll!519766cb()
krb5_64.dll!519670ff()
gssapi64.dll!51bb1839()
gssapi64.dll!51bb48e4()
gssapi64.dll!51bb4575()
gssapi64.dll!51b993df()
libpq.dll!pqsecure_open_gss(pg_conn * conn) Line 632
at
c:\users\dpage\downloads\postgresql\src\interfaces\libpq\fe-secure-gssapi.c(632)
libpq.dll!PQconnectPoll(pg_conn * conn) Line 3173
at
c:\users\dpage\downloads\postgresql\src\interfaces\libpq\fe-connect.c(3173)
libpq.dll!connectDBComplete(pg_conn * conn) Line 2187
at
c:\users\dpage\downloads\postgresql\src\interfaces\libpq\fe-connect.c(2187)
libpq.dll!PQconnectdbParams(const char * const * keywords, const char *
const * values, int expand_dbname) Line 655
at
c:\users\dpage\downloads\postgresql\src\interfaces\libpq\fe-connect.c(655)
psql.exe!main(int argc, char * * argv) Line 266
at c:\users\dpage\downloads\postgresql\src\bin\psql\startup.c(266)
[External Code]

-- 
Dave Page
Blog: http://pgsnake.blogspot.com
Twitter: @pgsnake

EDB: http://www.enterprisedb.com


Re: Kerberos support broken on MSVC builds for Windows x64?

2020-09-03 Thread Dave Page
On Wed, Sep 2, 2020 at 5:21 PM Dave Page  wrote:

>
>
> On Wed, Sep 2, 2020 at 2:47 PM Stephen Frost  wrote:
>
>> Greetings,
>>
>> * Dave Page (dp...@pgadmin.org) wrote:
>> > On Tue, Sep 1, 2020 at 5:29 PM Stephen Frost 
>> wrote:
>> > > * Dave Page (dp...@pgadmin.org) wrote:
>> > > > Attached is a patch against 12.4 for the build system in case anyone
>> > > wants
>> > > > to play (I'll do it properly against the head branch later). I'm
>> guessing
>> > > > this will work for < 12, as with 12 I'm now getting the following
>> which
>> > > > looks like it's related to GSS encryption:
>> > > >
>> > > > "C:\Users\dpage\Downloads\postgresql-12.4\pgsql.sln" (default
>> target)
>> > > (1) ->
>> > > > "C:\Users\dpage\Downloads\postgresql-12.4\pgcrypto.vcxproj" (default
>> > > > target) (2) ->
>> > > > "C:\Users\dpage\Downloads\postgresql-12.4\postgres.vcxproj" (default
>> > > > target) (3) ->
>> > > > (Link target) ->
>> > > >   be-secure-gssapi.obj : error LNK2019: unresolved external symbol
>> setenv
>> > > > referenced in function secure_open_gssapi
>> > > > [C:\Users\dpage\Downloads\postgresql-12.4\postgres.vcxproj]
>> > > >   .\Release\postgres\postgres.exe : fatal error LNK1120: 1
>> unresolved
>> > > > externals
>> [C:\Users\dpage\Downloads\postgresql-12.4\postgres.vcxproj]
>> > > >
>> > > > I'll dig into that some more.
>> > >
>> > > Yes, that'd be in the GSSENC code, which I hadn't been expecting to be
>> > > used under Windows.  If you're successful, I don't have any issue
>> > > helping to make that work, though I'm curious if you're trying to
>> build
>> > > with MIT KfW (which is rather ancient these days, being based on krb5
>> > > 1.13 and not updated since..) or with a more current release...?
>> >
>> > I'm currently using the KFW 4.1 build from MIT. I've tried building it
>> > myself but it requires a very old toolchain (which defeated the point of
>> > what I was trying to do at the time).
>>
>> > I haven't yet looked to see if the source for krb5-1.8.2 will build or
>> even
>> > has the right bits in it for Windows - as I'm sure you know MIT seem to
>> > maintain an entirely different version for Windows for which I assume
>> > there's a reason.
>>
>> I'm a bit confused as to why you'd consider trying 1.8.2- did you mean
>> 1.18.2 there, perhaps..?
>
>
> Yes, typo.
>
>
>> That's what I would think to try, since, as I
>> understand it from following the Kerberos Dev list (which is pretty
>> responsive...) has been updated to work with newer Windows build
>> toolchains.
>>
>
> OK, will try to do that tomorrow.
>
> Thanks!
>

OK, so 1.18.2 builds OK. It's a bit of a faff, but nothing major. It seems
to work fine as a standalone set of tools.

Of course, they've changed the installation paths again - they've dropped
the i386 and amd64 parts from the library path :-/

So having rebuilt PostgreSQL against that, I'm now in the situation where
the server never even attempts to get a ticket as far as I can see, and
psql just crashes with nothing more than a useless error in the event log:

Faulting application name: psql.exe, version: 14.0.0.20246, time stamp:
0x5f50e477
Faulting module name: unknown, version: 0.0.0.0, time stamp: 0x
Exception code: 0xc005
Fault offset: 0x
Faulting process id: 0xd10
Faulting application start time: 0x01d681f189a17360
Faulting application path: C:\pg\bin\psql.exe
Faulting module path: unknown
Report Id: eb68d787-1c82-420d-8878-bc0648932a5d
Faulting package full name:
Faulting package-relative application ID:

So I'm going to have to break out the debugger, though I suspect this may
require more effort than I have time for right now.


-- 
Dave Page
Blog: http://pgsnake.blogspot.com
Twitter: @pgsnake

EDB: http://www.enterprisedb.com


Re: Kerberos support broken on MSVC builds for Windows x64?

2020-09-02 Thread Dave Page
Hi

On Wed, Sep 2, 2020 at 7:08 PM Stephen Frost  wrote:

> Greetings,
>
> * Dave Page (dp...@pgadmin.org) wrote:
> > On Wed, Sep 2, 2020 at 9:05 AM Dave Page  wrote:
> > >> Yes, that'd be in the GSSENC code, which I hadn't been expecting to be
> > >> used under Windows.
> >
> > Here's a patch to make it build successfully (against head). I believe
> the
> > changes to Solution.pm should be back patched all the way, and the rest
> to
> > 12.
>
> Looks about right..  I might pull out the code from both places that are
> setting that variable into a dedicated function to be used from both
> though.
>
> > Testing however, has been more problematic - I suspect at least partly
> > because of my Kerberos newbie-ness. I have a test server in an Ubuntu VM,
> > which I've used quite successfully to authenticate against another VM
> > running PG 12  on Ubuntu, from both Ubuntu and Windows clients. Using
> that,
> > but with a Windows client running MIT Kerberos I find that getting a
> ticket
> > takes a good 30 seconds or so. Postgres also seems to get it's ticket
> > successfully via the keytab file:
>
> So, from Windows clients that don't have MIT KfW installed, you're able
> to authenticate against PG 12 on Ubuntu using Kerberos, right..?  With
> PG built using SSPI on the client side, I'm guessing?
>

Yes, with the workstation configured to authenticate windows login with
Kerberos (e.g.
https://www.garyhawkins.me.uk/non-domain-mit-kerberos-logins-on-windows-10/)


>
> Kerberos uses reverse DNS to try to check what hostname to use when
> requesting a ticket, I wonder if what you're seeing here is a delay due
> to there not being reverse DNS functional in the environment, perhaps..?
>

Ahh, probably. I'm just using host files on these VMs, but I'll bet I
forgot to add the client to the kdc's file. Will try that tomorrow.


>
> > C:\pg>"c:\Program Files\MIT\Kerberos\bin\klist.exe"
> > Ticket cache: API:Initial default ccache
> > Default principal: dp...@pgadmin.org
> >
> > Valid starting ExpiresService principal
> > 09/02/20 15:06:49  09/03/20 01:06:49  krbtgt/pgadmin@pgadmin.org
> > renew until 09/03/20 15:06:31
> > 09/02/20 15:07:06  09/03/20 01:06:49  postgres/win-ilt1arj8a9c@
> > renew until 09/03/20 15:06:31
> > 09/02/20 15:07:06  09/03/20 01:06:49  postgres/
> win-ilt1arj8...@pgadmin.org
> > renew until 09/03/20 15:06:31
> >
> > However, If I try to login using host + gss in the pg_hba.conf file, I
> then
> > get:
> >
> > C:\pg>bin\psql postgres
> > psql: error: could not connect to server: SSPI continuation error: No
> > credentials are available in the security package
> >  (8009030e)
>
> This is with PG compiled with GSS on the client side and using MIT KfW?
>

Yes.


>
> This particular error from SSPI seems to possibly be coming from the
> constrained delegation system.  While not directly about this issue,
> Microsoft has some documentation about configuring constrained
> delegation (and how to turn it off) here:
>
>
> https://docs.microsoft.com/en-us/windows-server/virtualization/hyper-v/deploy/Set-up-hosts-for-live-migration-without-Failover-Clustering
>
> Now, we aren't actually delegating credentials here, so it seems a bit
> odd for it to be complaining about that, but perhaps it's throwing this
> error because the MIT KfW library has no clue about constrained
> delegation and therefore wouldn't be trying to enforce it.
>

OK, I'll look into that.


>
> > If I try to use hostgssenc + gss, it looks like it's not even trying to
> > encrypt:
> >
> > C:\pg>bin\psql postgres
> > psql: error: could not connect to server: FATAL:  no pg_hba.conf entry
> for
> > host "::1", user "dpage", database "postgres", SSL off
> >
> > Any ideas?
>
> If it's not trying then I would be suspicious that the
> gss_acquire_creds() call is saying that there isn't a credential cache,
> though that would be a bit odd given that klist seems to be working.
>
> Would certainly be interesting to see if 1.18.2 changes anything in this
> regard.
>

I'll let you know how that goes. Thanks for the tips!

-- 
Dave Page
Blog: http://pgsnake.blogspot.com
Twitter: @pgsnake

EDB: http://www.enterprisedb.com


Re: Kerberos support broken on MSVC builds for Windows x64?

2020-09-02 Thread Dave Page
On Wed, Sep 2, 2020 at 2:47 PM Stephen Frost  wrote:

> Greetings,
>
> * Dave Page (dp...@pgadmin.org) wrote:
> > On Tue, Sep 1, 2020 at 5:29 PM Stephen Frost  wrote:
> > > * Dave Page (dp...@pgadmin.org) wrote:
> > > > Attached is a patch against 12.4 for the build system in case anyone
> > > wants
> > > > to play (I'll do it properly against the head branch later). I'm
> guessing
> > > > this will work for < 12, as with 12 I'm now getting the following
> which
> > > > looks like it's related to GSS encryption:
> > > >
> > > > "C:\Users\dpage\Downloads\postgresql-12.4\pgsql.sln" (default target)
> > > (1) ->
> > > > "C:\Users\dpage\Downloads\postgresql-12.4\pgcrypto.vcxproj" (default
> > > > target) (2) ->
> > > > "C:\Users\dpage\Downloads\postgresql-12.4\postgres.vcxproj" (default
> > > > target) (3) ->
> > > > (Link target) ->
> > > >   be-secure-gssapi.obj : error LNK2019: unresolved external symbol
> setenv
> > > > referenced in function secure_open_gssapi
> > > > [C:\Users\dpage\Downloads\postgresql-12.4\postgres.vcxproj]
> > > >   .\Release\postgres\postgres.exe : fatal error LNK1120: 1 unresolved
> > > > externals [C:\Users\dpage\Downloads\postgresql-12.4\postgres.vcxproj]
> > > >
> > > > I'll dig into that some more.
> > >
> > > Yes, that'd be in the GSSENC code, which I hadn't been expecting to be
> > > used under Windows.  If you're successful, I don't have any issue
> > > helping to make that work, though I'm curious if you're trying to build
> > > with MIT KfW (which is rather ancient these days, being based on krb5
> > > 1.13 and not updated since..) or with a more current release...?
> >
> > I'm currently using the KFW 4.1 build from MIT. I've tried building it
> > myself but it requires a very old toolchain (which defeated the point of
> > what I was trying to do at the time).
>
> > I haven't yet looked to see if the source for krb5-1.8.2 will build or
> even
> > has the right bits in it for Windows - as I'm sure you know MIT seem to
> > maintain an entirely different version for Windows for which I assume
> > there's a reason.
>
> I'm a bit confused as to why you'd consider trying 1.8.2- did you mean
> 1.18.2 there, perhaps..?


Yes, typo.


> That's what I would think to try, since, as I
> understand it from following the Kerberos Dev list (which is pretty
> responsive...) has been updated to work with newer Windows build
> toolchains.
>

OK, will try to do that tomorrow.

Thanks!


>
> > > Of course, it'd be good to get a buildfarm animal in place that's
> > > actually testing this if we're going to make it work.
> >
> > Fixing the config on hamerkop should deal with that I think. Though I am
> > confused as to why the Buildfarm UI thinks it has Kerberos support
> enabled
> > - did we change the config parameter from krb5 to gss some time prior to
> > 9.5? If so, that could explain it.
>
> Looks to be run by SRA OSS..  Perhaps reaching out to them to ask about
> it would help?
>
> > > Regarding the setenv() call, should be able to use pgwin32_putenv() in
> > > place on Windows, I'd think..?
> >
> > Right, I imagine so. It's on my todo...
>
> Alright.
>
> Thanks,
>
> Stephen
>


-- 
Dave Page
Blog: http://pgsnake.blogspot.com
Twitter: @pgsnake

EDB: http://www.enterprisedb.com


Re: Kerberos support broken on MSVC builds for Windows x64?

2020-09-02 Thread Dave Page
Hi

On Wed, Sep 2, 2020 at 9:05 AM Dave Page  wrote:

>
>> Yes, that'd be in the GSSENC code, which I hadn't been expecting to be
>> used under Windows.
>
>
Here's a patch to make it build successfully (against head). I believe the
changes to Solution.pm should be back patched all the way, and the rest to
12.

Testing, however, has been more problematic - I suspect at least partly
because of my Kerberos newbie-ness. I have a test server in an Ubuntu VM,
which I've used quite successfully to authenticate against another VM
running PG 12 on Ubuntu, from both Ubuntu and Windows clients. Using that,
but with a Windows client running MIT Kerberos, I find that getting a ticket
takes a good 30 seconds or so. Postgres also seems to get its ticket
successfully via the keytab file:

C:\pg>"c:\Program Files\MIT\Kerberos\bin\klist.exe"
Ticket cache: API:Initial default ccache
Default principal: dp...@pgadmin.org

Valid starting       Expires            Service principal
09/02/20 15:06:49  09/03/20 01:06:49  krbtgt/pgadmin@pgadmin.org
renew until 09/03/20 15:06:31
09/02/20 15:07:06  09/03/20 01:06:49  postgres/win-ilt1arj8a9c@
renew until 09/03/20 15:06:31
09/02/20 15:07:06  09/03/20 01:06:49  postgres/win-ilt1arj8...@pgadmin.org
renew until 09/03/20 15:06:31

However, if I try to log in using host + gss in the pg_hba.conf file, I then
get:

C:\pg>bin\psql postgres
psql: error: could not connect to server: SSPI continuation error: No
credentials are available in the security package
 (8009030e)

If I try to use hostgssenc + gss, it looks like it's not even trying to
encrypt:

C:\pg>bin\psql postgres
psql: error: could not connect to server: FATAL:  no pg_hba.conf entry for
host "::1", user "dpage", database "postgres", SSL off
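
For clarity, the sort of pg_hba.conf entry I'm testing with looks roughly
like this (illustrative values, not my exact file):

# TYPE       DATABASE  USER  ADDRESS    METHOD
hostgssenc   all       all   ::1/128    gss

so if the client never even attempts GSSAPI encryption, nothing matches and
the "no pg_hba.conf entry" error above is what you get.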

Any ideas?


-- 
Dave Page
Blog: http://pgsnake.blogspot.com
Twitter: @pgsnake

EDB: http://www.enterprisedb.com


msvc64-kerberos-v2.diff
Description: Binary data


Re: Kerberos support broken on MSVC builds for Windows x64?

2020-09-02 Thread Dave Page
Hi

On Tue, Sep 1, 2020 at 5:29 PM Stephen Frost  wrote:

> Greetings,
>
> * Dave Page (dp...@pgadmin.org) wrote:
> > Attached is a patch against 12.4 for the build system in case anyone
> wants
> > to play (I'll do it properly against the head branch later). I'm guessing
> > this will work for < 12, as with 12 I'm now getting the following which
> > looks like it's related to GSS encryption:
> >
> > "C:\Users\dpage\Downloads\postgresql-12.4\pgsql.sln" (default target)
> (1) ->
> > "C:\Users\dpage\Downloads\postgresql-12.4\pgcrypto.vcxproj" (default
> > target) (2) ->
> > "C:\Users\dpage\Downloads\postgresql-12.4\postgres.vcxproj" (default
> > target) (3) ->
> > (Link target) ->
> >   be-secure-gssapi.obj : error LNK2019: unresolved external symbol setenv
> > referenced in function secure_open_gssapi
> > [C:\Users\dpage\Downloads\postgresql-12.4\postgres.vcxproj]
> >   .\Release\postgres\postgres.exe : fatal error LNK1120: 1 unresolved
> > externals [C:\Users\dpage\Downloads\postgresql-12.4\postgres.vcxproj]
> >
> > I'll dig into that some more.
>
> Yes, that'd be in the GSSENC code, which I hadn't been expecting to be
> used under Windows.  If you're successful, I don't have any issue
> helping to make that work, though I'm curious if you're trying to build
> with MIT KfW (which is rather ancient these days, being based on krb5
> 1.13 and not updated since..) or with a more current release...?
>

I'm currently using the KFW 4.1 build from MIT. I've tried building it
myself but it requires a very old toolchain (which defeated the point of
what I was trying to do at the time).

I haven't yet looked to see if the source for krb5-1.8.2 will build or even
has the right bits in it for Windows - as I'm sure you know MIT seem to
maintain an entirely different version for Windows for which I assume
there's a reason.


>
> Of course, it'd be good to get a buildfarm animal in place that's
> actually testing this if we're going to make it work.
>

Fixing the config on hamerkop should deal with that I think. Though I am
confused as to why the Buildfarm UI thinks it has Kerberos support enabled
- did we change the config parameter from krb5 to gss some time prior to
9.5? If so, that could explain it.


>
> Regarding the setenv() call, should be able to use pgwin32_putenv() in
> place on Windows, I'd think..?
>

Right, I imagine so. It's on my todo...
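
Something along these lines is the shape I have in mind -- a rough sketch
only, not a finished patch; it leans on the existing pgwin32_putenv()
helper and assumes the Windows CRT copies the string passed to it:

#ifdef WIN32
#include <stdio.h>
#include <stdlib.h>

extern int pgwin32_putenv(const char *envval);

/* Hypothetical setenv() substitute for the Windows build */
static int
pg_setenv(const char *name, const char *value, int overwrite)
{
    char        envbuf[1024];

    if (!overwrite && getenv(name) != NULL)
        return 0;

    /* _putenv() copies the string, so a local buffer is fine here */
    snprintf(envbuf, sizeof(envbuf), "%s=%s", name, value);
    return pgwin32_putenv(envbuf);
}
#endif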


-- 
Dave Page
Blog: http://pgsnake.blogspot.com
Twitter: @pgsnake

EDB: http://www.enterprisedb.com


Re: Kerberos support broken on MSVC builds for Windows x64?

2020-09-01 Thread Dave Page
On Tue, Sep 1, 2020 at 4:22 PM Dave Page  wrote:

> I was experimenting with building with MIT Kerberos support on 64 bit
> Windows using MSVC and ran into a number of linker errors along the lines
> of:
>
> "C:\Users\dpage\Downloads\postgresql-12.4\pgsql.sln" (default target) (1)
> ->
> "C:\Users\dpage\Downloads\postgresql-12.4\zic.vcxproj" (default target)
> (2) ->
> (Link target) ->
>   LINK : fatal error LNK1181: cannot open input file
> 'C:\Progra~1\MIT\Kerberos\lib.obj'
> [C:\Users\dpage\Downloads\postgresql-12.4\zic.vcxproj]
>
> That was after I had to manually add the include and lib paths in
> buildenv.pl. Diving in a bit further I found a couple of things:
>
> 1) The only buildfarm machine doing 64bit Windows Kerberos enabled builds
> with MSVC is hammerkop. It enables it by setting the "krb5" option in
> config.pl, however, as far as I can see (going back to 9.5), the option
> is actually "gss". I can't see any sign in the log for the make step that
> it actually is making any attempt to build with Kerberos, despite the UI
> showing the icon for it.
>
> 2) I can't find anything in the MSVC build scripts in src/tools/msvc to
> deal with 64bit Kerberos builds - Solution.pm seems to unconditionally try
> to link with the 32bit libraries (e.g. lib/i386/krb5_32.lib instead of
> lib/amd64/krb5_64.lib).
>
> I'm assuming no one has tried a build with 64bit Kerberos, or am I missing
> something?
>
> Sidenote: I'm not sure even a 32bit Kerberos build will work, as
> Solution.pm assumes the headers are in $self->{options}->{gss} .
> '\inc\krb5', however in at least the latest installer from MIT they're
> actually in $self->{options}->{gss} . '\include'.
>

Attached is a patch against 12.4 for the build system in case anyone wants
to play (I'll do it properly against the head branch later). I'm guessing
this will work for < 12, as with 12 I'm now getting the following which
looks like it's related to GSS encryption:

"C:\Users\dpage\Downloads\postgresql-12.4\pgsql.sln" (default target) (1) ->
"C:\Users\dpage\Downloads\postgresql-12.4\pgcrypto.vcxproj" (default
target) (2) ->
"C:\Users\dpage\Downloads\postgresql-12.4\postgres.vcxproj" (default
target) (3) ->
(Link target) ->
  be-secure-gssapi.obj : error LNK2019: unresolved external symbol setenv
referenced in function secure_open_gssapi
[C:\Users\dpage\Downloads\postgresql-12.4\postgres.vcxproj]
  .\Release\postgres\postgres.exe : fatal error LNK1120: 1 unresolved
externals [C:\Users\dpage\Downloads\postgresql-12.4\postgres.vcxproj]

I'll dig into that some more.

-- 
Dave Page
Blog: http://pgsnake.blogspot.com
Twitter: @pgsnake

EDB: http://www.enterprisedb.com


msvc64-kerberos.diff
Description: Binary data


Kerberos support broken on MSVC builds for Windows x64?

2020-09-01 Thread Dave Page
I was experimenting with building with MIT Kerberos support on 64 bit
Windows using MSVC and ran into a number of linker errors along the lines
of:

"C:\Users\dpage\Downloads\postgresql-12.4\pgsql.sln" (default target) (1) ->
"C:\Users\dpage\Downloads\postgresql-12.4\zic.vcxproj" (default target) (2)
->
(Link target) ->
  LINK : fatal error LNK1181: cannot open input file
'C:\Progra~1\MIT\Kerberos\lib.obj'
[C:\Users\dpage\Downloads\postgresql-12.4\zic.vcxproj]

That was after I had to manually add the include and lib paths in
buildenv.pl. Diving in a bit further I found a couple of things:

1) The only buildfarm machine doing 64bit Windows Kerberos enabled builds
with MSVC is hammerkop. It enables it by setting the "krb5" option in
config.pl, however, as far as I can see (going back to 9.5), the option is
actually "gss". I can't see any sign in the log for the make step that it
actually is making any attempt to build with Kerberos, despite the UI
showing the icon for it.

2) I can't find anything in the MSVC build scripts in src/tools/msvc to
deal with 64bit Kerberos builds - Solution.pm seems to unconditionally try
to link with the 32bit libraries (e.g. lib/i386/krb5_32.lib instead of
lib/amd64/krb5_64.lib).

I'm assuming no one has tried a build with 64bit Kerberos, or am I missing
something?

Sidenote: I'm not sure even a 32bit Kerberos build will work, as
Solution.pm assumes the headers are in $self->{options}->{gss} .
'\inc\krb5', however in at least the latest installer from MIT they're
actually in $self->{options}->{gss} . '\include'.

-- 
Dave Page
Blog: http://pgsnake.blogspot.com
Twitter: @pgsnake

EDB: http://www.enterprisedb.com


Re: EDB builds Postgres 13 with an obsolete ICU version

2020-08-18 Thread Dave Page
On Mon, Aug 17, 2020 at 7:23 PM Bruce Momjian  wrote:

> On Mon, Aug 17, 2020 at 04:55:13PM +0100, Dave Page wrote:
> > That was more if the installer actually handles the whole chain. It
> clearly
> > doesn't today (since it doesn't support upgrades), I agree this might
> > definitely be overkill. But then also I don't really see the problem
> with
> > just putting a new version of ICU in with the newer versions of
> PostgreSQL.
> > That's just puts the user in the same position as they are with any
> other
> > platform wrt manual pg_upgrade runs.
> >
> > Well we can certainly do that if everyone is happy in the knowledge that
> it'll
> > mean pg_upgrade users will need to reindex if they've used ICU
> collations.
> >
> > Sandeep; can you have someone do a test build with the latest ICU please
> (for
> > background, this would be with the Windows and Mac installers)? If noone
> > objects, we can push that into the v13 builds before GA. We'd also need
> to
> > update the README if we do so.
>
> Woh, we don't have any support in pg_upgrade to reindex just indexes
> that use ICU collations, and frankly, if they have to reindex, they
> might decide that they should just do pg_dump/reload of their cluster at
> that point because pg_upgrade is going to be very slow, and they will be
> surprised.


Not necessarily. It's likely that not all indexes use ICU collations, and
you still save time loading what may be large amounts of data.

I agree though, that it *could* be slow.


> I can see a lot more people being disappointed by this than
> will be happy to have Postgres using a newer ICU library.
>

Quite possibly, hence my hesitation to push ahead with anything more than a
simple test build at this time.


>
> Also, is it the ICU library version we should be tracking for reindex,
> or each _collation_ version?  If the latter, do we store the collation
> version for each index?
>

I wasn't aware that ICU had the concept of collation versions internally
(which Michael seems to have confirmed downthread). That would potentially
make the number of users needing a reindex even smaller, but as you point
out won't help us for years as we don't store it anyway.

-- 
Dave Page
Blog: http://pgsnake.blogspot.com
Twitter: @pgsnake

EDB: http://www.enterprisedb.com


Re: EDB builds Postgres 13 with an obsolete ICU version

2020-08-17 Thread Dave Page
On Mon, Aug 17, 2020 at 4:14 PM Magnus Hagander  wrote:

>
>
> On Mon, Aug 17, 2020 at 1:44 PM Dave Page  wrote:
>
>>
>>
>> On Mon, Aug 17, 2020 at 11:19 AM Magnus Hagander 
>> wrote:
>>
>>>
>>>
>>> On Fri, Aug 14, 2020 at 3:00 PM Bruce Momjian  wrote:
>>>
>>>> On Tue, Aug 11, 2020 at 02:58:30PM +0200, Magnus Hagander wrote:
>>>> > On Tue, Aug 4, 2020 at 11:42 AM Dave Page  wrote:
>>>> > That would require fairly large changes to the installer to allow
>>>> it to
>>>> > login to the database server (whether that would work would
>>>> be dependent on
>>>> > how pg_hba.conf is configured), and also assumes that the ICU ABI
>>>> hasn't
>>>> > changed between releases. It would also require some hacky
>>>> renaming of
>>>> > DLLs, as they have the version number in them.
>>>> >
>>>> > I assumed it had code for that stuff already. Mainly because I
>>>> assumed it
>>>> > supported doing pg_upgrade, which requires similar things no?
>>>>
>>>
>> No, the installers don't support pg_upgrade directly. They ship it of
>> course, and the user can manually run it, but the installers won't do that,
>> and have no ability to login to a cluster except during the post-initdb
>> phase.
>>
>
> Oh, I just assumed it did :)
>
> If it doesn't, I think shipping with a modern ICU is a much smaller
> problem really...
>
>
> While pg_upgrade requires having the old and new cluster software in
>>>> place, I don't think it helps allowing different ICU versions for each
>>>> cluster.
>>>
>>>
>>> Depends on where they are installed (and disclaimer, I don't know how
>>> the windows installers do that). But as long as the ICU libraries are
>>> installed in separate locations for the two versions, which I *think* they
>>> are or at least used to be, it shouldn't have an effect on this in either
>>> direction.
>>>
>>
>> They are.
>>
>
> Good. So putting both in wouldn't break things.
>
>
>
> That argument really only holds for different versions, not for different
>>> clusters of the same version. But I don't think the installers (natively)
>>> supports multiple clusters of the same version anyway.
>>>
>>
>> They don't. You'd need to manually init a new cluster and register a new
>> server instance. The installer only has any knowledge of the cluster it
>> sets up.
>>
>
> I'd say that's "unsupported enough" to not be a scenario one has to
> consider.
>

Agreed. Plus it's not really any different from running multiple clusters
on other OSs where we're likely to be using a vendor supplied ICU that the
user also couldn't change easily.


>
>
>
>>> The tricky thing is if you want to allow the user to *choose* which ICU
>>> version should be used with postgres version .  Because then the user
>>> might also expect an upgrade-path wherein they only upgrade the icu library
>>> on an existing install...
>>>
>>>
>>>> I guess you can argue that if you know the user is _not_ going
>>>> to be using pg_upgrade, then a new ICU version should be used for the
>>>> new cluster.
>>>>
>>>
>>> Yes, that's exactly the argument I meant :) If the user got the option
>>> to "pick version of ICU: , ", with a comment saying "pick old
>>> only if you plan to do a pg_upgrade based upgrade of a different cluster,
>>> or if this instance should participate in replication with a node using
>>> ", that would probably help for the vast majority of cases. And of
>>> course, if the installer through other options can determine with certainty
>>> that it's going to be running pg_upgrade for the user, then it can reword
>>> the dialog based on that (that is, it should still allow the user to pick
>>> the new version, as long as they know that their indexes are going to need
>>> reindexing)
>>>
>>
>> That seems like a very hacky and extremely user-unfriendly approach. How
>> many users are going to understand options in the installer to deal with
>> that, or want to go decode the ICU filenames on their existing
>> installations (which our installers may not actually know about) to figure
>> out what their current version is?
>>
>
>
> That was more if the installer actually handles the whole chain. It
> c

Re: EDB builds Postgres 13 with an obsolete ICU version

2020-08-17 Thread Dave Page
On Mon, Aug 17, 2020 at 11:19 AM Magnus Hagander 
wrote:

>
>
> On Fri, Aug 14, 2020 at 3:00 PM Bruce Momjian  wrote:
>
>> On Tue, Aug 11, 2020 at 02:58:30PM +0200, Magnus Hagander wrote:
>> > On Tue, Aug 4, 2020 at 11:42 AM Dave Page  wrote:
>> > That would require fairly large changes to the installer to allow
>> it to
>> > login to the database server (whether that would work would
>> be dependent on
>> > how pg_hba.conf is configured), and also assumes that the ICU ABI
>> hasn't
>> > changed between releases. It would also require some hacky renaming
>> of
>> > DLLs, as they have the version number in them.
>> >
>> > I assumed it had code for that stuff already. Mainly because I assumed
>> it
>> > supported doing pg_upgrade, which requires similar things no?
>>
>
No, the installers don't support pg_upgrade directly. They ship it of
course, and the user can manually run it, but the installers won't do that,
and have no ability to login to a cluster except during the post-initdb
phase.


>
>> While pg_upgrade requires having the old and new cluster software in
>> place, I don't think it helps allowing different ICU versions for each
>> cluster.
>
>
> Depends on where they are installed (and disclaimer, I don't know how the
> windows installers do that). But as long as the ICU libraries are installed
> in separate locations for the two versions, which I *think* they are or at
> least used to be, it shouldn't have an effect on this in either direction.
>

They are.


>
> That argument really only holds for different versions, not for different
> clusters of the same version. But I don't think the installers (natively)
> supports multiple clusters of the same version anyway.
>

They don't. You'd need to manually init a new cluster and register a new
server instance. The installer only has any knowledge of the cluster it
sets up.


>
> The tricky thing is if you want to allow the user to *choose* which ICU
> version should be used with postgres version .  Because then the user
> might also expect an upgrade-path wherein they only upgrade the icu library
> on an existing install...
>
>
>> I guess you can argue that if you know the user is _not_ going
>> to be using pg_upgrade, then a new ICU version should be used for the
>> new cluster.
>>
>
> Yes, that's exactly the argument I meant :) If the user got the option to
> "pick version of ICU: , ", with a comment saying "pick old only
> if you plan to do a pg_upgrade based upgrade of a different cluster, or if
> this instance should participate in replication with a node using ",
> that would probably help for the vast majority of cases. And of course, if
> the installer through other options can determine with certainty that it's
> going to be running pg_upgrade for the user, then it can reword the dialog
> based on that (that is, it should still allow the user to pick the new
> version, as long as they know that their indexes are going to need
> reindexing)
>

That seems like a very hacky and extremely user-unfriendly approach. How
many users are going to understand options in the installer to deal with
that, or want to go decode the ICU filenames on their existing
installations (which our installers may not actually know about) to figure
out what their current version is?

I would suggest that the better way to handle this would be for pg_upgrade
to (somehow) check the ICU version on the old and new clusters and, if
there's a mismatch, perform a reindex of any ICU-based indexes. I suspect
that may require that the server exposes the ICU version though. That way,
the installers could freely upgrade the ICU version with a new major
release.

-- 
Dave Page
Blog: http://pgsnake.blogspot.com
Twitter: @pgsnake

EDB: http://www.enterprisedb.com


Re: EDB builds Postgres 13 with an obsolete ICU version

2020-08-04 Thread Dave Page
On Tue, Aug 4, 2020 at 10:29 AM Magnus Hagander  wrote:

> On Tue, Aug 4, 2020 at 10:07 AM Dave Page  wrote:
>
>>
>>
>> On Tue, Aug 4, 2020 at 1:04 AM Bruce Momjian  wrote:
>>
>>> On Mon, Aug  3, 2020 at 08:56:06PM +0200, Daniel Verite wrote:
>>> >  Hi,
>>> >
>>> > As a follow-up to bug #16570 [1] and other previous discussions
>>> > on the mailing-lists, I'm checking out PG13 beta for Windows
>>> > from:
>>> >  https://www.enterprisedb.com/postgresql-early-experience
>>> > and it ships with the same obsolete ICU 53 that was used
>>> > for PG 10,11,12.
>>> > Besides not having the latest Unicode features and fixes, ICU 53
>>> > ignores the BCP 47 tags syntax in collations used as examples
>>> > in Postgres documentation, which leads to confusion and
>>> > false bug reports.
>>> > The current version is ICU 67.
>>> >
>>> > I don't see where the suggestion to upgrade it before the
>>> > next PG release should be addressed but maybe some people on
>>> > this list do know or have the leverage to make it happen?
>>>
>>> Well, you can ask EDB about this, but perhaps they have kept the same ICU
>>> version so indexes will not need to be reindexed.
>>>
>>
>> Correct - updating ICU would mean a reindex is required following any
>> upgrade, major or minor.
>>
>> I would really like to find an acceptable solution to this however as it
>> really would be good to be able to update ICU.
>>
>
> It certainly couldn't and shouldn't be done in a minor.
>
> But doing so in v13 doesn't seem entirely unreasonable, especially given
> that I believe we will detect the requirement to reindex thanks to the
> versioning, and not just start returning invalid results (like, say, with
> those glibc updates).
>
> Would it be possible to have the installer even check if there are any icu
> indexes in the database. If there aren't, just put in the new version of
> icu. If there are, give the user a choice of the old version or new version
> and reindex?
>

That would require fairly large changes to the installer to allow it to
login to the database server (whether that would work would be dependent on
how pg_hba.conf is configured), and also assumes that the ICU ABI hasn't
changed between releases. It would also require some hacky renaming of
DLLs, as they have the version number in them.

The chances of designing, building and testing that thoroughly before v13
is released is about zero I'd say.

-- 
Dave Page
Blog: http://pgsnake.blogspot.com
Twitter: @pgsnake

EDB: http://www.enterprisedb.com


Re: EDB builds Postgres 13 with an obsolete ICU version

2020-08-04 Thread Dave Page
On Tue, Aug 4, 2020 at 1:04 AM Bruce Momjian  wrote:

> On Mon, Aug  3, 2020 at 08:56:06PM +0200, Daniel Verite wrote:
> >  Hi,
> >
> > As a follow-up to bug #16570 [1] and other previous discussions
> > on the mailing-lists, I'm checking out PG13 beta for Windows
> > from:
> >  https://www.enterprisedb.com/postgresql-early-experience
> > and it ships with the same obsolete ICU 53 that was used
> > for PG 10,11,12.
> > Besides not having the latest Unicode features and fixes, ICU 53
> > ignores the BCP 47 tags syntax in collations used as examples
> > in Postgres documentation, which leads to confusion and
> > false bug reports.
> > The current version is ICU 67.
> >
> > I don't see where the suggestion to upgrade it before the
> > next PG release should be addressed but maybe some people on
> > this list do know or have the leverage to make it happen?
>
> Well, you can ask EDB about this, but perhaps they have kept the same ICU
> version so indexes will not need to be reindexed.
>

Correct - updating ICU would mean a reindex is required following any
upgrade, major or minor.

I would really like to find an acceptable solution to this however as it
really would be good to be able to update ICU.

-- 
Dave Page
Blog: http://pgsnake.blogspot.com
Twitter: @pgsnake

EDB: http://www.enterprisedb.com


Re: Display individual query in pg_stat_activity

2020-07-27 Thread Dave Page
On Mon, Jul 27, 2020 at 4:28 PM Jeremy Schneider 
wrote:

> On 7/27/20 07:57, Dave Page wrote:
>
> I'm not sure I'd want that to happen, as it could make it much harder to
> track the activity back to a query in the application layer or server logs.
>
> Perhaps a separate field could be added for the current statement, or a
> value to indicate what the current statement number in the query is?
>
>
> Might be helpful to give some specifics about circumstances where strings
> can appear in pg_stat_activity.query with multiple statements.
>
> 1) First of all, IIUC multiple statements are only supported in the first
> place by the simple protocol and PLs.  Anyone using parameterized
> statements (bind variables) should be unaffected by this.
>
> 2) My read of the official pg JDBC driver is that even for batch
> operations it currently iterates and sends each statement individually. I
> don't think the JDBC driver has the capability to send multiple statements,
> so java apps using this driver should be unaffected.
>

That is just one of a number of different popular drivers of course.


>
> 3) psql -c will always send the string as a single "simple protocol"
> request.  Scripts will be impacted.
>
> 4) PLs also seem to have a code path that can put multiple statements in
> pg_stat_activity when parallel slaves are launched.  PL code will be
> impacted.
>
> 5) pgAdmin uses the simple protocol and when a user executes a block of
> statements, pgAdmin seems to send the whole block as a single "simple
> protocol" request.  Tools like pgAdmin will be impacted.
>

It does. It also prepends some queries with comments, specifically to allow
users to filter them out when they're analysing logs (a feature requested
by users, not just something we thought was a good idea). I'm assuming that
this patch would also strip those?


>
> At the application layer, it doesn't seem problematic to me if PostgreSQL
> reports each query one at a time.  IMO most people will find this to be a
> more useful behavior and they will still find their queries in their app
> code or app logs.
>

I think there are arguments to be made for both approaches.


>
> However at the PostgreSQL logging layer this is a good call-out.  I just
> did a quick test on 14devel to double-check my assumption and it does seem
> that PostgreSQL logs the entire combined query for psql -c.  I think it
> would be better for PostgreSQL to report queries individually in the log
> too - for example pgBadger summaries will be even more useful if they
> report information for each individual query rather than a single big block
> of multiple queries.
>
> Given how small this patch is, it seems worthwhile to at least investigate
> whether the logging component could be addressed just as easily.
>
> -Jeremy
>
> --
> Jeremy Schneider
> Database Engineer
> Amazon Web Services
>
>
>

-- 
Dave Page
Blog: http://pgsnake.blogspot.com
Twitter: @pgsnake

EDB: http://www.enterprisedb.com


Re: Display individual query in pg_stat_activity

2020-07-27 Thread Dave Page
Hi

On Mon, Jul 27, 2020 at 3:40 PM Drouvot, Bertrand 
wrote:

> Hi hackers,
>
> I've attached a patch to display individual query in the pg_stat_activity
> query field when multiple SQL statements are currently displayed.
>
> *Motivation:*
>
> When multiple statements are displayed then we don’t know which one is
> currently running.
>

I'm not sure I'd want that to happen, as it could make it much harder to
track the activity back to a query in the application layer or server logs.

Perhaps a separate field could be added for the current statement, or a
value to indicate what the current statement number in the query is?

-- 
Dave Page
Blog: http://pgsnake.blogspot.com
Twitter: @pgsnake

EDB: http://www.enterprisedb.com


Re: New 'pg' consolidated metacommand patch

2020-05-27 Thread Dave Page
On Wed, May 27, 2020 at 3:00 PM Mark Dilger 
wrote:

>
>
> > On May 26, 2020, at 4:59 PM, David G. Johnston <
> david.g.johns...@gmail.com> wrote:
> >
> > On Tue, May 26, 2020 at 4:19 PM Mark Dilger <
> mark.dil...@enterprisedb.com> wrote:
> > I'd also appreciate +1 and -1 votes on the overall idea, in case this
> entire feature, regardless of implementation, is simply something the
> community does not want.
> >
> > -1, at least as part of core.  My question would be how much of this is
> would be needed if someone were to create an external project that
> installed a "pg" command on top of an existing PostgreSQL installation.  Or
> put differently, how many of the changes to the existing binaries are
> required versus nice-to-have?
>
> If the only goal of something like this were to have a frontend that could
> execute the various postgres binaries, then I'd say no changes to those
> binaries would be needed, and the frontend would not be worth very much.
> The value in having the frontend is that it makes it less difficult to
> introduce new commands to the postgres suite of commands, as you don't need
> to worry about whether another executable by the same name might happen to
> already exist somewhere.  Even introducing a command named "pg" has already
> gotten such a response on this thread.  By having the commands installed in
> postgres's libexec rather than bin, you can put whatever commands you want
> in libexec without worrying about conflicts.  That still leaves open the
> question of whether existing commands get moved into libexec, and if so, if
> they keep the same name.  An external project for this would be worthless
> in this regard, as the community wouldn't get any benefit when debating the
> merits of introducing a new command vs. the potential for conflicts.
>

The issue you raise can almost certainly be resolved simply by prefixing
all the existing binary names with pg- or something similar.

I think the beauty of having a single CLI executable is that we can
redesign the user interface to make it nice and consistent for all the
different functions it offers, and to clean up old cruft such as createuser
vs. createrole and pgbench vs. pg_* and so on.

-- 
Dave Page
Blog: http://pgsnake.blogspot.com
Twitter: @pgsnake

EnterpriseDB UK: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


Re: New 'pg' consolidated metacommand patch

2020-05-27 Thread Dave Page
Hi

On Wed, May 27, 2020 at 12:19 AM Mark Dilger 
wrote:

>
> I think it makes sense that packagers could put the LIBEXECDIR in the PATH
> so that 3rd-party scripts which call pg_ctl, initdb, etc. continue to
> work.


Having packages that futz with the PATH is generally a bad idea, especially
those that support side-by-side installations of different versions. None
of ours (EDB's) will be doing so.

-- 
Dave Page
Blog: http://pgsnake.blogspot.com
Twitter: @pgsnake

EnterpriseDB UK: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


Re: PG14 target version?

2020-03-04 Thread Dave Page
On Wed, Mar 4, 2020 at 9:14 AM Daniel Gustafsson  wrote:

> > On 4 Mar 2020, at 06:28, Michael Paquier  wrote:
> >
> > On Tue, Mar 03, 2020 at 01:10:26PM -0500, David Steele wrote:
> >> Anybody know how to add 14 to the "Target version" dropdown in the CF
> app?
> >
> > The only person knowing that stuff is I think Magnus.  I don't have an
> > access to that.
>
> Magnus, or someone else on the infra team since it requires an update to
> the
> database. Looping in -www for visibility.
>

Hmm, I just tried to log in to the admin site to do this and got a 500
error. Unfortunately I'm now off to take a number of meetings, so can't
look more closely myself.

-- 
Dave Page
Blog: http://pgsnake.blogspot.com
Twitter: @pgsnake

EnterpriseDB UK: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


Re: PG v12.2 - Setting jit_above_cost is causing the server to crash

2020-02-27 Thread Dave Page
Hi

On Thu, Feb 27, 2020 at 12:41 PM Tom Lane  wrote:

> Aditya Toshniwal  writes:
> > On Mon, Feb 24, 2020 at 12:46 PM Andres Freund 
> wrote:
> >> This isn't reproducible here. Are you sure that you're running on a
> >> clean installation?
>
> > Yes I did a fresh installation using installer provided here -
> > https://www.enterprisedb.com/downloads/postgresql
>
> There is apparently something wrong with the JIT stuff in EDB's 12.2
> build for macOS.  At least, that's the conclusion I came to after
> off-list discussion with the submitter of bug #16264, which has pretty
> much exactly this symptom (especially if you're seeing "signal 9"
> reports in the postmaster log).  For him, either disabling JIT or
> reverting to 12.1 made it go away.
>

We've been looking into this;

Apple started a notarisation process some time ago, designed to mark
third-party applications as conforming to various security requirements,
but prior to Catalina it was essentially optional. When Catalina was
released, they made notarisation for distributed software a requirement,
but had the process issue warnings for non-compliance. As of the end of
January, those warnings became hard errors, so now our packages must be
notarised, and for that to happen, must be hardened by linking with a
special runtime and having securely time-stamped signatures on every binary
before being checked and notarised as such by Apple. Without that, users
would have to disable security features on their systems before they could
run our software.

Our packages are being successfully notarised at the moment, because that's
essentially done through a static analysis. We have added what Apple call
an entitlement to test builds, which essentially puts a flag in the
notarisation for the product that declares that it will do JIT operations.
However, it seems that this alone is not enough, and that in addition to
the entitlement, we also need to include the MAP_JIT flag in mmap() calls.
See
https://developer.apple.com/documentation/security/hardened_runtime and
https://developer.apple.com/documentation/bundleresources/entitlements/com_apple_security_cs_allow-jit

We're working on trying to test a patch for that at the moment.
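
For illustration only (the actual mmap() call is made inside the LLVM JIT
rather than in our own code), the kind of allocation that has to be allowed
under the hardened runtime looks roughly like this, assuming the
com.apple.security.cs.allow-jit entitlement is present:

#include <stddef.h>
#include <sys/mman.h>

static void *
alloc_jit_block(size_t len)
{
    /* MAP_JIT is required for writable+executable mappings under the
     * hardened runtime with the allow-jit entitlement */
    void       *region = mmap(NULL, len,
                              PROT_READ | PROT_WRITE | PROT_EXEC,
                              MAP_PRIVATE | MAP_ANON | MAP_JIT,
                              -1, 0);

    return (region == MAP_FAILED) ? NULL : region;
}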

-- 
Dave Page
Blog: http://pgsnake.blogspot.com
Twitter: @pgsnake

EnterpriseDB UK: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


Re: GSoC proposal for pgAdmin 4 bytea support

2019-04-01 Thread Dave Page
Hi

On Mon, Apr 1, 2019 at 3:12 AM Haoran Yu  wrote:

> Dear PostgreSQL community,
>
> I have submitted a proposal for the project pgAdmin 4 bytea support. The
> project discusses storing media content (images, audio, video) as bytea.
> However, I have a quick question. What does bytea data look like typically
> when storing media content? What I had in mind is, media contents that uses
> MIME type, which are rendered as part of HTML. For example, the following
> is rendered as a red dot:
>
> 'data:image/png;base64,iVBORw0KGgoNSUhEUgUA
> AAAFCAYAAACNbyblHElEQVQI12P4//8/w38GIAXDIBKE0DHxgljNBAAO
> 9TXL0Y4OHwBJRU5ErkJggg==’
>
> This string is decoded to bytea, and I stored it in a bytea column.
>
> What are some other examples of using bytea to store media content, not
> necessarily using the MIME type? Is there a way to detect the type of these
> media (audio, image) stored in bytea?
>

When I have stored small media items in bytea columns in the past, I just
stored the data. I vaguely recall I did store the mime type in another
column, but that may not be the case in all scenarios (e.g. when a system
is designed to store only PNGs). I think you should assume it's raw data
only, and try to determine the file type by examining the data, e.g. (a
short sketch follows the list of examples below):

PNG files have an 8 byte signature:
http://www.libpng.org/pub/png/spec/1.2/PNG-Structure.html
MPEG files have identifying information in the frame header that you may be
able to use: http://mpgedit.org/mpgedit/mpeg_format/MP3Format.html
JPEG images have identifying markers:
https://en.wikipedia.org/wiki/JPEG_File_Interchange_Format

etc.
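
Checking those signatures is straightforward; here's a minimal sketch in C
(pgAdmin itself is Python, so treat this purely as an illustration of the
byte comparisons, not as code to use directly):

#include <stddef.h>
#include <string.h>

static const char *
guess_media_type(const unsigned char *data, size_t len)
{
    static const unsigned char png_sig[8] =
        {0x89, 'P', 'N', 'G', 0x0D, 0x0A, 0x1A, 0x0A};

    if (len >= 8 && memcmp(data, png_sig, 8) == 0)
        return "image/png";
    if (len >= 3 && data[0] == 0xFF && data[1] == 0xD8 && data[2] == 0xFF)
        return "image/jpeg";
    if (len >= 3 && memcmp(data, "ID3", 3) == 0)
        return "audio/mpeg";    /* MP3 with an ID3v2 tag */
    return "application/octet-stream";
}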


> Another question I had is, I read that there are performance-related
> issues for storing media in bytea. Are there practical ways to store bytea
> data that does not face performance-related issues? For example, storing
> large media content using multiple bytea parts, and reassembling them
> together once retrieved from the database?
>

Not that I'm aware of. For larger objects, most people store them
externally (which of course loses ACID properties). There are certainly
applications for storing smaller objects directly in the database though -
and some folks have done work in the past with index types and
operators/functions for finding and comparing images for example, so there
are also benefits other than ACID to storing data in this way.

BTW, for pgAdmin-related GSoC questions, you'd do better to ask on
pgadmin-hack...@postgresql.org.

-- 
Dave Page
Blog: http://pgsnake.blogspot.com
Twitter: @pgsnake

EnterpriseDB UK: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


Re: Should we still have old release notes in docs?

2019-02-12 Thread Dave Page



> On 12 Feb 2019, at 16:00, Greg Stark  wrote:
> 
> I was just perusing our PDF docs for the first time in ages and
> realized that of the 3,400+ pages of docs there's about 1,000 pages of
> release notes in it. That seems like a bit overkill.
> 
> I love having the old release notes online but perhaps they can be
> somewhere other than the main docs? We could limit the current docs to
> including the release notes for just the supported versions -- after
> all you can always get the old release notes in the old docs
> themselves

+1. It does seem excessive.



Re: phase out ossp-uuid?

2019-02-07 Thread Dave Page
On Thu, Feb 7, 2019 at 8:26 AM Peter Eisentraut
 wrote:
>
> I'm wondering whether we should phase out the use of the ossp-uuid
> library? (not the uuid-ossp extension)  We have had preferred
> alternatives for a while now, so it shouldn't be necessary to use this
> anymore, except perhaps in some marginal circumstances?  As we know,
> ossp-uuid isn't maintained anymore, and a few weeks ago the website was
> gone altogether, but it seems to be back now.
>
> I suggest we declare it deprecated in PG12 and remove it altogether in PG13.

Much as I'd like to get rid of it, we don't have an alternative for
Windows do we? The docs for 11 imply it's required for UUID support
(though the wording isn't exactly clear, saying it's required for
UUID-OSSP support!):
https://www.postgresql.org/docs/11/install-windows-full.html#id-1.6.4.8.8

-- 
Dave Page
Blog: http://pgsnake.blogspot.com
Twitter: @pgsnake

EnterpriseDB UK: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



Code of Conduct

2018-09-18 Thread Dave Page
The PostgreSQL Core team are pleased to announce that following a long
consultation process, the project’s Code of Conduct (CoC) has now been
finalised and published at https://www.postgresql.org/about/policies/coc/.

Please take time to read and understand the CoC, which is intended to
ensure that PostgreSQL remains an open and enjoyable project for anyone to
join and participate in.

A Code of Conduct Committee has been formed to handle any complaints. This
consists of the following volunteers:

- Stacey Haysler (Chair)
- Lætitia Avrot
- Vik Fearing
- Jonathan Katz
- Ilya Kosmodemiansky

We would like to extend our thanks and gratitude to Stacey Haysler for her
patience and expertise in helping develop the Code of Conduct, forming the
committee and guiding the work to completion.

-- 
Dave Page
PostgreSQL Core Team
http://www.postgresql.org/
<http://www.postgresql.org/>


Re: Code of Conduct plan

2018-09-14 Thread Dave Page
On Fri, Sep 14, 2018 at 3:28 PM, Adrian Klaver 
wrote:

> On 9/14/18 7:19 AM, Dave Page wrote:
>
>>
>>
>>
>
>> No one is tracking anything as part of the CoC. That's nothing but a
>> straw man argument.
>>
>
> Not buying it or the below is null and void:
>
> "This Code is meant to cover all interaction between community members,
> whether or not it takes place within postgresql.org infrastructure, so
> long as there is not another Code of Conduct that takes precedence (such as
> a conference's Code of Conduct)."
>
> Not sure how the above can be enforced without someone reporting on what
> is said outside the 'postgresql.org infrastructure'?
>
> At any rate, whether I like it or not the CoC is here to stay. I just feel
> a dissenting opinion is important to the conversation.


I can report someone who steals my wallet to the police. That doesn't mean I
track pickpockets' activity.

-- 
Dave Page
Blog: http://pgsnake.blogspot.com
Twitter: @pgsnake

EnterpriseDB UK: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


Re: Code of Conduct plan

2018-09-14 Thread Dave Page
On Fri, Sep 14, 2018 at 3:57 PM, James Keener  wrote:

>
>
>> Yes, I believe so. Isn't that what "To that end, we have established
>> this Code of Conduct for community interaction and participation in the
>> project’s work and the community at large." basically says?
>>
>
> No? What's the "community at large"? To me that sounds like "all
> interactions" whether or not they're about postgres.
>

That wording has been in the published draft for 18 months, and no one
objected to it that I'm aware of. There will always be people who don't
like some of the wording, much as there are often people who disagree with
the way a patch to the code is written. Sooner or later though, the general
consensus prevails and we have to move on; otherwise nothing will ever get
completed.

-- 
Dave Page
Blog: http://pgsnake.blogspot.com
Twitter: @pgsnake

EnterpriseDB UK: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


Re: Code of Conduct plan

2018-09-14 Thread Dave Page
On Fri, Sep 14, 2018 at 3:55 PM, James Keener  wrote:

>
>
> Yes. They can. The people who make the majority of the contributions to
>> the software can decide what happens, because without them there is no
>> software. If you want to spend 20 years of your life
>>
>
> So everyone who moderates this group and that will be part of the CoC
> committee will have had to have dedicated their life of pg?
>
> Sure, they own the servers, they make the rules. I get it. I'm not
> entirely opposed to it, even if I think it's silly to ram something down
> the rest of the groups throats.
>
> Jim
>
> PS: Also, what's with the personal replies? If you don't want to say what
> you want to the whole group, I don't really have an interest in talking to
> you personally.
>

I've had one off-list personal reply in this thread... from you :-p

-- 
Dave Page
Blog: http://pgsnake.blogspot.com
Twitter: @pgsnake

EnterpriseDB UK: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


Re: Code of Conduct plan

2018-09-14 Thread Dave Page
On Fri, Sep 14, 2018 at 3:43 PM, Joshua D. Drake 
wrote:

> On 09/14/2018 07:36 AM, Dave Page wrote:
>
>
>
> On Fri, Sep 14, 2018 at 3:21 PM, James Keener  wrote:
>
>>
>> Now, you may say that (2) would be rejected by the committee, but I would
>>>> counter that it's still a stain on me and something that will forever
>>>> appear
>>>> along side my name in search results and that the amount of time and
>>>> stress it'd take me to defend myself would make my voluntarily leaving
>>>> the community, which would be seen as an admission of guilt, my only
>>>> option.
>>>>
>>>
>>> If you had read the policy, you would know that wouldn't happen as
>>> reports and details of reports are to be kept confidential.
>>>
>>
>> That doesn't mean I won't be strung along and it doesn't mean that the
>> attacker can't release those details. Remember, I'm worried
>> about politically motivated attacks, and attacks meant to silence
>> opposing viewpoints, not legitimate instances of harassment.
>>
>
> Sure, but an attacker can do that now. Having the CoC doesn't change
> anything there, though it does give us a framework to deal with it.
>
>
>>
>>
>>>
>>>
>>>>
>>>> People are shitheads. People are assholes. We're not agreeing to join
>>>> some organization and sign an ethics clause when signing up for the
>>>> mailing
>>>> list.  The current moderators can already remove bad actors from the
>>>> list.
>>>> How they act outside of the list is non of this list's concern.
>>>>
>>>
>>> The lists are just one of many different ways people in this community
>>> interact.
>>>
>>
>> So? We interact with people outside of specific groups all the time.
>> Baring specific
>> agreements to the contrary, why should any one group claim responsibility
>> of my
>> personal business?
>>
>
> If that business is publicly bringing the project into disrepute, or
> harassing other community members and they approach us about it, then it
> becomes our business.
>
> If it's unrelated to PostgreSQL, then it's your personal business and not
> something the project would get involved in.
>
>
> O.k. so this isn't clear (at least to me) within the CoC. I want to make
> sure I understand. You are saying that if a community member posts on
> Twitter that they believe gays are going to hell, reporting that to the CoC
> committee would result in a non-violation UNLESS they referenced postgresql
> within the post?
>

Yes, I believe so. Isn't that what "To that end, we have established this Code
of Conduct for community interaction and participation in the project’s
work and the community at large." basically says?

-- 
Dave Page
Blog: http://pgsnake.blogspot.com
Twitter: @pgsnake

EnterpriseDB UK: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


Re: Code of Conduct plan

2018-09-14 Thread Dave Page
On Fri, Sep 14, 2018 at 3:41 PM, James Keener  wrote:

> > Community is people who joined it
>
> We're not a "community." We're people using email to get help with or
> discuss technical aspects of PostgreSQL. The types of discussions that
> would normally be held within a "community" would be entirely off-topic
> here.  We should be professional to each other here; we don't need to be
> buddies. There is a clear difference between "professionalism" and
> "community". A document governing interactions on this list is within the
> right of the moderation, but leaking into the "real world" is an
> abomination and perversion of what this group is.
>

To many of us, we absolutely are a community. Remember, there are people
here who have been around for 20+ years, of which many have become close
friends, having started working on PostgreSQL as a hobby. We have always
seen the project as a community of like-minded technologists, and welcome
others that wish to join, whether just to ask a single question or to hang
around for the next 20 years. I do see your viewpoint, but I would counter
that coming here for help (for example) is quite different from calling
tech support at a vendor.


>
> My church group is 100% within their right to kick me out of teaching
> Sunday School if I were to have an affair. Teaching Sunday School is an act
> taking place as part of a community of people with a shared belief and
> culture. My job would 100% not be within their right to fire me for having
> an affair, as it's not a community, but a professional environment and my
> personal life is just that: personal. (Barring an ethics clause signed when
> joining, I guess?)
>

> Jim
>
>
> On Fri, Sep 14, 2018 at 10:31 AM, Ilya Kosmodemiansky 
> wrote:
>
>>
>>
>> On 14. Sep 2018, at 16:17, Dave Page  wrote:
>>
>>
>> The lists are just one of many different ways people in this community
>> interact.
>>
>>
>> I could only heavily +1 this. I can get from where comes the idea that
>> community is only what happens just on postgresql.org or just on some
>> other channel community uses. Community is people who joined it and CoC
>> supposed to apply even if people use analogue telephones. This is about
>> communication, not about communication channels.
>>
>>
>>
>> --
>> Dave Page
>> Blog: http://pgsnake.blogspot.com
>> Twitter: @pgsnake
>>
>> EnterpriseDB UK: http://www.enterprisedb.com
>> The Enterprise PostgreSQL Company
>>
>>
>


-- 
Dave Page
Blog: http://pgsnake.blogspot.com
Twitter: @pgsnake

EnterpriseDB UK: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


Re: Code of Conduct plan

2018-09-14 Thread Dave Page
On Fri, Sep 14, 2018 at 3:37 PM, Joshua D. Drake 
wrote:

> On 09/14/2018 07:14 AM, Dave Page wrote:
>
>
>
> On Fri, Sep 14, 2018 at 3:08 PM, Joshua D. Drake 
> wrote:
>
>> On 09/14/2018 01:31 AM, Chris Travers wrote:
>>
>>
>> I apologize for the glacial slowness with which this has all been moving.
>>> The core team has now agreed to some revisions to the draft CoC based on
>>> the comments in this thread; see
>>>
>>> https://wiki.postgresql.org/wiki/Code_of_Conduct
>>>
>>> (That's the updated text, but you can use the diff tool on the page
>>> history tab to see the changes from the previous draft.)
>>>
>>
>> I really have to object to this addition:
>> "This Code is meant to cover all interaction between community members,
>> whether or not it takes place within postgresql.org infrastructure, so
>> long as there is not another Code of Conduct that takes precedence (such as
>> a conference's Code of Conduct)."
>>
>> That covers things like public twitter messages over live political
>> controversies which might not be personally directed.   At least if one is
>> going to go that route, one ought to *also* include a safe harbor for
>> non-personally-directed discussions of philosophy, social issues, and
>> politics.  Otherwise, I think this is asking for trouble.  See, for
>> example, what happened with Opalgate and how this could be seen to
>> encourage use of this to silence political controversies unrelated to
>> PostgreSQL.
>>
>>
>> I think this is a complicated issue. On the one hand, postgresql.org has
>> no business telling people how to act outside of postgresql.org. Full
>> stop.
>>
>
> I'm going to regret jumping in here, but...
>
> I disagree. If a community member decides to join forums for other
> software and then strongly promotes PostgreSQL to the point that they
> become abusive or offensive to people making other software choices, then
> they are clearly bringing the project into disrepute and we should have
> every right to sanction them by preventing them participating in our
> project in whatever ways are deemed appropriate.
>
>
> We all know that PostgreSQL is the only database we should use and anybody
> using a different one just hasn't been enlightened yet. :P
>
> I think we need to define community member. I absolutely see your point of
> the individual is a contributor but community member is rather ethereal in
> this context don't you think?
>

There are some fuzzy edges I guess (e.g. Slack), but in my mind it's always
been anyone who participates in any of the project's communications channels.


-- 
Dave Page
Blog: http://pgsnake.blogspot.com
Twitter: @pgsnake

EnterpriseDB UK: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


Re: Code of Conduct plan

2018-09-14 Thread Dave Page
On Fri, Sep 14, 2018 at 3:08 PM, Joshua D. Drake 
wrote:

> On 09/14/2018 01:31 AM, Chris Travers wrote:
>
>
> I apologize for the glacial slowness with which this has all been moving.
>> The core team has now agreed to some revisions to the draft CoC based on
>> the comments in this thread; see
>>
>> https://wiki.postgresql.org/wiki/Code_of_Conduct
>>
>> (That's the updated text, but you can use the diff tool on the page
>> history tab to see the changes from the previous draft.)
>>
>
> I really have to object to this addition:
> "This Code is meant to cover all interaction between community members,
> whether or not it takes place within postgresql.org infrastructure, so
> long as there is not another Code of Conduct that takes precedence (such as
> a conference's Code of Conduct)."
>
> That covers things like public twitter messages over live political
> controversies which might not be personally directed.   At least if one is
> going to go that route, one ought to *also* include a safe harbor for
> non-personally-directed discussions of philosophy, social issues, and
> politics.  Otherwise, I think this is asking for trouble.  See, for
> example, what happened with Opalgate and how this could be seen to
> encourage use of this to silence political controversies unrelated to
> PostgreSQL.
>
>
> I think this is a complicated issue. On the one hand, postgresql.org has
> no business telling people how to act outside of postgresql.org. Full
> stop.
>

I'm going to regret jumping in here, but...

I disagree. If a community member decides to join forums for other software
and then strongly promotes PostgreSQL to the point that they become abusive
or offensive to people making other software choices, then they are clearly
bringing the project into disrepute and we should have every right to
sanction them by preventing them participating in our project in whatever
ways are deemed appropriate.

-- 
Dave Page
Blog: http://pgsnake.blogspot.com
Twitter: @pgsnake

EnterpriseDB UK: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


Re: Code of Conduct plan

2018-09-14 Thread Dave Page
On Fri, Sep 14, 2018 at 3:21 PM, James Keener  wrote:

>
> Now, you may say that (2) would be rejected by the committee, but I would
>>> counter that it's still a stain on me and something that will forever
>>> appear
>>> alongside my name in search results and that the amount of time and
>>> stress it'd take me to defend myself would make my voluntarily leaving
>>> the community, which would be seen as an admission of guilt, my only
>>> option.
>>>
>>
>> If you had read the policy, you would know that wouldn't happen as
>> reports and details of reports are to be kept confidential.
>>
>
> That doesn't mean I won't be strung along and it doesn't mean that the
> attacker can't release those details. Remember, I'm worried
> about politically motivated attacks, and attacks meant to silence opposing
> viewpoints, not legitimate instances of harassment.
>

Sure, but an attacker can do that now. Having the CoC doesn't change
anything there, though it does give us a framework to deal with it.


>
>
>>
>>
>>>
>>> People are shitheads. People are assholes. We're not agreeing to join
>>> some organization and sign an ethics clause when signing up for the
>>> mailing
>>> list.  The current moderators can already remove bad actors from the
>>> list.
>>> How they act outside of the list is none of this list's concern.
>>>
>>
>> The lists are just one of many different ways people in this community
>> interact.
>>
>
> So? We interact with people outside of specific groups all the time.
> Baring specific
> agreements to the contrary, why should any one group claim responsibility
> of my
> personal business?
>

If that business is publicly bringing the project into disrepute, or
harassing other community members and they approach us about it, then it
becomes our business.

If it's unrelated to PostgreSQL, then it's your personal business and not
something the project would get involved in.

-- 
Dave Page
Blog: http://pgsnake.blogspot.com
Twitter: @pgsnake

EnterpriseDB UK: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


Re: Code of Conduct plan

2018-09-14 Thread Dave Page
On Fri, Sep 14, 2018 at 3:13 PM, Adrian Klaver 
wrote:

> On 9/14/18 6:59 AM, Robert Eckhardt wrote:
>
>> On Fri, Sep 14, 2018 at 9:41 AM, Adrian Klaver
>>  wrote:
>>
>>> On 9/14/18 1:31 AM, Chris Travers wrote:
>>>
>>
>
>>>> I really have to object to this addition:
>>>> "This Code is meant to cover all interaction between community members,
>>>> whether or not it takes place within postgresql.org <
>>>> http://postgresql.org>
>>>> infrastructure, so long as there is not another Code of Conduct that
>>>> takes
>>>> precedence (such as a conference's Code of Conduct)."
>>>>
>>>
>>>
>>> I second that objection. It is not in PGDG's remit to cure the world, for
>>> whatever form of cure you ascribe to. This is especially true as
>>> 'community
>>> member' has no strict definition.
>>>
>>
>> I understand the concern, however, if you look at how attacks happen
>> it is frequently through other sites. Specifically under/poorly
>> moderated sites. For specific examples, people who have issues with
>> people on Quora will frequently go after them on Facebook and Twitter.
>>
>> these aren't a solution looking for a problem. If we just want to look
>> at the clusterfuck that is happening in the Redis community right now
>> we can see conversations spilling onto twitter and into ad hominem
>> vitriol.
>>
>
> Ask yourself, if this was a government agency tracking your speech across
> platforms would you be as approving? Personally I find the whole thing
> creepy.


No one is tracking anything as part of the CoC. That's nothing but a straw
man argument.

-- 
Dave Page
Blog: http://pgsnake.blogspot.com
Twitter: @pgsnake

EnterpriseDB UK: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


Re: Code of Conduct plan

2018-09-14 Thread Dave Page
On Fri, Sep 14, 2018 at 3:10 PM, James Keener  wrote:

> I understand the concern, however, if you look at how attacks happen
>
>> it is frequently through other sites. Specifically under/poorly
>> moderated sites. For specific examples, people who have issues with
>> people on Quora will frequently go after them on Facebook and Twitter.
>>
>> these aren't a solution looking for a problem. If we just want to look
>> at the clusterfuck that is happening in the Redis community right now
>> we can see conversations spilling onto twitter and into ad hominem
>> vitriol.
>>
>
> You haven't established that this is both 1) the PG mailing list's problem
> and that 2) this can't and won't be used to retaliate against those holding
> unpopular viewpoints but aren't specifically harassing anyone.
>
> Now, you may say that (2) would be rejected by the committee, but I would
> counter that it's still a stain on me and something that will forever
> appear
> alongside my name in search results and that the amount of time and
> stress it'd take me to defend myself would make my voluntarily leaving
> the community, which would be seen as an admission of guilt, my only
> option.
>

If you had read the policy, you would know that wouldn't happen as reports
and details of reports are to be kept confidential.


>
> People are shitheads. People are assholes. We're not agreeing to join
> some organization and sign an ethics clause when signing up for the mailing
> list.  The current moderators can already remove bad actors from the list.
> How they act outside of the list is none of this list's concern.
>

The lists are just one of many different ways people in this community
interact.

-- 
Dave Page
Blog: http://pgsnake.blogspot.com
Twitter: @pgsnake

EnterpriseDB UK: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


Re: How can we submit code patches that implement our (pending) patents?

2018-07-24 Thread Dave Page
On Tue, Jul 24, 2018 at 4:26 PM, Tomas Vondra 
wrote:

> On 07/24/2018 05:20 PM, Alvaro Herrera wrote:
>
>> On 2018-Jul-24, Dave Page wrote:
>>
>> tldr; it's a crap ton of work, risk and uncertainty for what might well be
>>> zero benefit at the moment.
>>>
>>
>> Probably easiest way forward is to state the requirement and have
>> someone untainted by the patent come up with a clean-room
>> re-implementation.
>>
>>
> Clean room design addresses copyright-related issues, not patents.
>

Correct. It's important folks realise that!

-- 
Dave Page
Blog: http://pgsnake.blogspot.com
Twitter: @pgsnake

EnterpriseDB UK: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


Re: How can we submit code patches that implement our (pending) patents?

2018-07-24 Thread Dave Page
On Mon, Jul 23, 2018 at 8:12 PM, Joshua D. Drake 
wrote:

> On 07/23/2018 12:06 PM, Bruce Momjian wrote:
>
>> So, is it FUD?  The core needs paid-for legal advice, not speculation.
>>>
>>> I'm quite certain that a software license can make a patent grant to the
>>> satisfaction of many open source communities, and almost certainly to
>>> the satisfaction of the PG community.  But it will take an IP lawyer to
>>> review or write such a license.
>>>
>> And is the payback worth it?  Many don't think so.
>>
>
> Although Nico is correct, I also think we need to consider what the
> community wants here. Historically, we have always explicitly avoided
> anything to do with patents to the point where some hackers won't even read
> white papers on patented methods. I do think there is a definite
> technological advantage for PostgreSQL if there was a license that core
> could accept that was patent friendly but frankly, I don't think that core
> or the community has the desire to work through the cost of doing so.


Exactly. There would be a ton of work to do, lawyers to involve (which
inevitably means lots of fun work for me :-) ), uncertainty for forks,
likely changes to a licence many of us hold dear, various potential risks
to the project, both in the short and long term - and in return for what?
One or more patches that we currently know nothing about, have no idea of
the benefit of, that if posted now would likely not get any eyes on them at
all precisely because they're covered by a patent.

tldr; it's a crap ton of work, risk and uncertainty for what might well be
zero benefit at the moment.

-- 
Dave Page
Blog: http://pgsnake.blogspot.com
Twitter: @pgsnake

EnterpriseDB UK: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


Re: How can we submit code patches that implement our (pending) patents?

2018-07-11 Thread Dave Page
On Wed, Jul 11, 2018 at 1:34 AM, Tsunakawa, Takayuki <
tsunakawa.ta...@jp.fujitsu.com> wrote:

> From: Dave Page [mailto:dp...@pgadmin.org]
> > SFLC have acted as the project's counsel in the past, so I'm not surprised
> > they aren't talking to you; you won't be a known contact to them as a PG
> > contributor, and as a Fujitsu employee there would likely be a conflict
> > of interest for them to talk to you.
>
> I see.  Then I hope for some reply saying so so that I can give up my hope
> that I might get a good idea from them...
>
> Who is a known contact to SFLC in PostgreSQL community?  Can we expect
> response from SFLC if he/she contacts them?
>

I am - however, as you've seen from Tom, the core team has already discussed
this and come to the conclusion that the risks and downsides of accepting
code under patent far outweigh the benefits, so I won't be contacting them
about this.

We do thank you (and Fujitsu) for your efforts though.

-- 
Dave Page
Blog: http://pgsnake.blogspot.com
Twitter: @pgsnake

EnterpriseDB UK: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


Re: How can we submit code patches that implement our (pending) patents?

2018-07-10 Thread Dave Page
Hi

On Tue, Jul 10, 2018 at 9:29 AM, Tsunakawa, Takayuki <
tsunakawa.ta...@jp.fujitsu.com> wrote:

> From: Markus Wanner [mailto:markus.wan...@2ndquadrant.com]
> > equally sure there are well intended ones as well. For example, I'd
> > expect patent pools (including the Open Invention Network, cited by the
> > OP) to hire non-IANAL personnel who know Legalese well enough to setup
> > valid contracts (between participating companies).
>
> I think I'll consult Open Invention Network on this issue, since I haven't
> received any reply from SFLC.
>

SFLC have acted as the project's counsel in the past, so I'm not surprised
they aren't talking to you; you won't be a known contact to them as a PG
contributor, and as a Fujitsu employee there would likely be a conflict of
interest for them to talk to you.


>
> > I certainly like the (future) patent holder coming forth to offer a
> > grant a lot better than the one who doesn't (but still holds the
> > patent). I'm missing the appreciation for that former strategy in this
> > thread and fear we're setting a precedent for the latter one, instead.
>
> Me too.
>
>
> Regards
> Takayuki Tsunakawa
>
>


-- 
Dave Page
Blog: http://pgsnake.blogspot.com
Twitter: @pgsnake

EnterpriseDB UK: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


Re: pgAdmin4 Docker behind load balancer

2018-05-23 Thread Dave Page
On Tue, May 22, 2018 at 8:09 PM, Daniel Gustafsson <dan...@yesql.se> wrote:

> > On 22 May 2018, at 18:07, Lenain <lena...@gmail.com> wrote:
> >
> > Hello hackers,
> >
> > We are currently using the dpage/pgadmin4 image to run a pgAdmin4 web
> interface behind an AWS application load balancer.
> > The load balancer is configured to check the health of containers by
> querying the /login URI and checking if it answers with a 200 HTTP code.
> >
> > However the app always sends a new cookie for this page, storing it in
> the mounted docker volume.
> > It is understandable that generating a new session on login is intended,
> but as load balancers check this URI numerous times a day, it quickly fills
> and uses all of the inodes of the volume as it generates session tokens,
> and consequently also saturates the inodes of the underlying system.
> >
> > We are therefore looking for another URI to do our healthcheck that
> won't generate a new session item.
> > However it seems that even on static assets or redirects, the app sets
> the pga4_session cookie.
> >
> > Is there another way available to do these checks ? Am I missing
> something ?
>
> This is the mailing list for the core postgres database server.  While there
> certainly are lots of people skilled in pgadmin here, you will probably
> have a
> better chance of getting help on the pgadmin-support mailing list:
>
>     https://www.postgresql.org/list/pgadmin-support/


+1

Though to save on traffic, /misc/ping should give you what you need.
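
For reference, here's a minimal, untested sketch of a health-check probe
against /misc/ping. Everything in it beyond the /misc/ping path is an
assumption for illustration: the local URL, the exit-code convention, and
the cookie check (verify the Set-Cookie behaviour against your own
deployment).

import sys
import urllib.request

# Hypothetical address; adjust to wherever the pgAdmin 4 container listens.
URL = "http://localhost:80/misc/ping"

def main() -> int:
    try:
        with urllib.request.urlopen(URL, timeout=5) as resp:
            # A healthy instance should answer 200; anything else fails the probe.
            if resp.status != 200:
                print("unhealthy: HTTP %d" % resp.status, file=sys.stderr)
                return 1
            # Warn if the endpoint still hands out a session cookie, since
            # session files are exactly what filled the volume when /login
            # was used as the health-check target.
            if resp.headers.get("Set-Cookie"):
                print("warning: /misc/ping set a cookie", file=sys.stderr)
            return 0
    except Exception as exc:  # connection error, timeout, HTTP error, etc.
        print("unhealthy: %s" % exc, file=sys.stderr)
        return 1

if __name__ == "__main__":
    sys.exit(main())

The load balancer (or a Docker HEALTHCHECK) can then treat a non-zero exit
code as unhealthy without creating a new session file per probe.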

-- 
Dave Page
Blog: http://pgsnake.blogspot.com
Twitter: @pgsnake

EnterpriseDB UK: http://www.enterprisedb.com
The Enterprise PostgreSQL Company

