Re: releasing space

2019-10-19 Thread Adrian Klaver

On 10/19/19 4:51 PM, Julie Nishimura wrote:

an entire cluster


*From:* Adrian Klaver 
*Sent:* Saturday, October 19, 2019 4:34 PM
*To:* Julie Nishimura ; Tomas Vondra 

*Cc:* pgsql-general@lists.postgresql.org


*Subject:* Re: releasing space
On 10/19/19 4:17 PM, Julie Nishimura wrote:
Thank you, Tomas. Do you know if it is safe to replicate a 9.6.2
(smaller) db to 9.6.15 (larger capacity) using pg_basebackup? Would it
be considered an upgrade?


pg_basebackup backs up an entire Postgres cluster, which may contain
many databases. So when you say db, do you mean a Postgres cluster or
an individual database?




*From:* Tomas Vondra 
*Sent:* Saturday, October 19, 2019 5:44 AM
*To:* Julie Nishimura 
*Cc:* pgsql-general@lists.postgresql.org


*Subject:* Re: releasing space
On Thu, Oct 17, 2019 at 05:20:09PM +, Julie Nishimura wrote:

Hello everybody, we are running a PostgreSQL 9.6.2 cluster, master ->
standby (streaming replication), with 22 TB of space (we are constantly
struggling with space, pruning old data, but not fast enough). The
biggest db takes 16 TB. So we've copied it to another server, and now we
would like to delete it from the original source to free up the space.
What would be the right approach for this? Just issue a DROP DATABASE
command (16 TB)? How long might it take? Should we do it gradually (drop
the biggest tables first)? Any suggestions? Caveats?



Generally speaking, DROP DATABASE simply recursively drops all the
various objects - indexes, tables, etc. It's mostly just deleting the
files, which should not be very expensive (we certainly don't need to
delete all the data or anything), but there's a certain amount of I/O
involved. Whether that's an issue depends on the OS / filesystem /
hardware.

So if you want to be on the safe side, you can drop the objects one by
one, with a bit of delay between them, to throttle the I/O a bit.

FWIW the latest minor release for 9.6 is 9.6.15, so you're 13 minor
versions (~30 months) of fixes behind. You might want to consider
upgrading ...


--
Tomas Vondra http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services



--
Adrian Klaver
adrian.kla...@aklaver.com



--
Adrian Klaver
adrian.kla...@aklaver.com




Re: releasing space

2019-10-19 Thread Tomas Vondra

On Sat, Oct 19, 2019 at 04:34:32PM -0700, Adrian Klaver wrote:

On 10/19/19 4:17 PM, Julie Nishimura wrote:
Thank you, Tomas. Do you know if it is safe to replicate a 9.6.2
(smaller) db to 9.6.15 (larger capacity) using pg_basebackup? Would
it be considered an upgrade?


pg_basebackup backs up an entire Postgres cluster, which may contain
many databases. So when you say db, do you mean a Postgres cluster or
an individual database?




My understanding is Julie wants to create a copy of a 9.6.2 cluster
using pg_basebackup and then run 9.6.15 on it. That's OK, it's
essentially a minor version upgrade.
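
(For illustration, a minimal sketch of such a copy - the host, user and
data directory are assumptions, not from the thread; pg_basebackup and
pg_ctl come from the 9.6.15 installation on the new server:)

   $ pg_basebackup -h old-server -U replicator \
         -D /var/lib/pgsql/9.6/data -X stream -P
   $ pg_ctl -D /var/lib/pgsql/9.6/data start   # starts the 9.6.15 binaries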

FWIW Julie, please don't top post - it just makes it harder to follow
the discussion. Also, this seems like a completely separate question,
unrelated to the DROP DATABASE one. It might be better to start a new
thread instead of repurposing an existing one.

regards

--
Tomas Vondra  http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services 





Re: releasing space

2019-10-19 Thread Adrian Klaver

On 10/19/19 4:17 PM, Julie Nishimura wrote:
Thank you, Tomas. Do you know if it is safe to replicate a 9.6.2
(smaller) db to 9.6.15 (larger capacity) using pg_basebackup? Would it
be considered an upgrade?


pg_basebackup backs up an entire Postgres cluster, which may contain
many databases. So when you say db, do you mean a Postgres cluster or
an individual database?





*From:* Tomas Vondra 
*Sent:* Saturday, October 19, 2019 5:44 AM
*To:* Julie Nishimura 
*Cc:* pgsql-general@lists.postgresql.org


*Subject:* Re: releasing space
On Thu, Oct 17, 2019 at 05:20:09PM +, Julie Nishimura wrote:

Hello everybody, we are running a PostgreSQL 9.6.2 cluster, master ->
standby (streaming replication), with 22 TB of space (we are constantly
struggling with space, pruning old data, but not fast enough). The
biggest db takes 16 TB. So we've copied it to another server, and now we
would like to delete it from the original source to free up the space.
What would be the right approach for this? Just issue a DROP DATABASE
command (16 TB)? How long might it take? Should we do it gradually (drop
the biggest tables first)? Any suggestions? Caveats?



Generally speaking, DROP DATABASE simply recursively drops all the
various objects - indexes, tables, etc. It's mostly just deleting the
files, which should not be very expensive (we certainly don't need to
delete all the data or anything), but there's a certain amount of I/O
involved. Whether that's an issue depends on the OS / filesystem /
hardware.

So if you want to be on the safe side, you can drop the objects one by
one, with a bit of delay between them, to throttle the I/O a bit.

FWIW the latest minor release for 9.6 is 9.6.15, so you're 13 minor
versions (~30 months) of fixes behind. You might want to consider
upgrading ...


--
Tomas Vondra http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services



--
Adrian Klaver
adrian.kla...@aklaver.com




Re: releasing space

2019-10-19 Thread Julie Nishimura
Thank you, Tomas. Do you know if it is safe to replicate a 9.6.2 (smaller) db
to 9.6.15 (larger capacity) using pg_basebackup? Would it be considered an
upgrade?


From: Tomas Vondra 
Sent: Saturday, October 19, 2019 5:44 AM
To: Julie Nishimura 
Cc: pgsql-general@lists.postgresql.org
Subject: Re: releasing space

On Thu, Oct 17, 2019 at 05:20:09PM +, Julie Nishimura wrote:
>Hello everybody, we are running a PostgreSQL 9.6.2 cluster, master ->
>standby (streaming replication), with 22 TB of space (we are constantly
>struggling with space, pruning old data, but not fast enough). The
>biggest db takes 16 TB. So we've copied it to another server, and now
>we would like to delete it from the original source to free up the
>space. What would be the right approach for this? Just issue a DROP
>DATABASE command (16 TB)? How long might it take? Should we do it
>gradually (drop the biggest tables first)? Any suggestions? Caveats?
>

Generally speaking, DROP DATABASE simply recursively drops all the
various objects - indexes, tables, etc. It's mostly just deleting the
files, which should not be very expensive (we certainly don't need to
delete all the data or anything), but there's a certain amount of I/O
involved. Whether that's an issue depends on the OS / filesystem /
hardware.

So if you want to be on the safe side, you can drop the objects one by
one, with a bit of delay between them, to throttle the I/O a bit.

FWIW the latest minor release for 9.6 is 9.6.15, so you're 13 minor
versions (~30 months) of fixes behind. You might want to consider
upgrading ...


--
Tomas Vondra  http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


Re: Execute a function through fdw

2019-10-19 Thread Jeff Janes
On Fri, Oct 18, 2019 at 7:55 AM Tom Lane  wrote:

> Guillaume Lelarge  writes:
> > On Fri, Oct 18, 2019 at 11:51 AM, Patrick FICHE wrote:
> >> Is it possible to execute a function located on a server accessed
> >> through Postgres fdw?
>
> > It's probably easier to create a view on the remote server, and access it
> > as a foreign table on the local server.
>

Yes, a view would probably work here, but not if the function takes
user-supplied arguments.

>
> Yeah.  Or if you really want to call a remote function by name, see
> dblink.  postgres_fdw actively avoids doing that sort of thing.
>

And importantly, you can specify the name of the existing postgres_fdw
server to the dblink functions in place of the connection string.  This
removes quite a bit of the drudgery of using dblink, if you are already
using postgres_fdw.
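
(For illustration, a minimal sketch of that combination - the server name
"remote_srv" and the remote function my_func() are assumptions; it relies
on an existing postgres_fdw foreign server with a user mapping:)

   CREATE EXTENSION IF NOT EXISTS dblink;

   -- dblink accepts the postgres_fdw server name instead of a
   -- connection string and reuses its connection options
   SELECT result
     FROM dblink('remote_srv', 'SELECT my_func(42)')
          AS t(result integer);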

Cheers,

Jeff


Re: jsonb_set() strictness considered harmful to data

2019-10-19 Thread Ariadne Conill
Hello,

On Sat, Oct 19, 2019, 3:27 PM Tomas Vondra 
wrote:

> On Sat, Oct 19, 2019 at 12:47:39PM -0400, Andrew Dunstan wrote:
> >
> >On 10/19/19 12:32 PM, David G. Johnston wrote:
> >> On Sat, Oct 19, 2019 at 9:19 AM Tomas Vondra
> >> mailto:tomas.von...@2ndquadrant.com>>
> >> wrote:
> >>
> >> >
> >> >We invented jsonb_set() (credit to Dmitry Dolgov). And we've had it
> >> >since 9.5. That's five releases ago.  So it's a bit late to be
> >> coming to
> >> >us telling us it's not safe (according to your preconceptions of
> >> what it
> >> >should be doing).
> >> >
> >>
> >>
> >> There have been numerous complaints and questions about this behavior
> >> in those five years; and none of the responses to those defenses has
> >> actually made the current behavior sound beneficial but rather have
> >> simply said "this is how it works, deal with it".
> >
> >
> >I haven't seen a patch, which for most possible solutions should be
> >fairly simple to code. This is open source. Code speaks louder than
> >complaints.
> >
>
> IMHO that might be a bit too harsh - I'm not surprised no one sent a
> patch when we're repeatedly telling people "you're holding it wrong".
> Without a clear consensus what the "correct" behavior is, I wouldn't
> send a patch either.
>
> >
> >>
> >> >
> >> >We could change it prospectively (i.e. from release 13 on) if we
> >> choose.
> >> >But absent an actual bug (i.e. acting contrary to documented
> >> behaviour)
> >> >we do not normally backpatch such changes, especially when there
> is a
> >> >simple workaround for the perceived problem. And it's that policy
> >> that
> >> >is in large measure responsible for Postgres' deserved reputation
> for
> >> >stability.
> >> >
> >>
> >> Yeah.
> >>
> >>
> >> Agreed, this is v13 material if enough people come on board to support
> >> making a change.
> >
> >
> >
> >We have changed such things in the past. But maybe a new function might
> >be a better way to go. I haven't given it enough thought yet.
> >
>
> I think the #1 thing we should certainly do is explain the behavior
> in the docs.
>
> >
> >
> >>
> >> >And if we were to change it I'm not at all sure that we should do
> >> it the
> >> >way that's suggested here, which strikes me as no more intuitive
> than
> >> >the current behaviour. Rather I think we should possibly fill in
> >> a json
> >> >null in the indicated place.
> >> >
> >>
> >> Not sure, but that seems rather confusing to me, because it's
> >> mixing SQL
> >> NULL and JSON null, i.e. it's not clear to me why
> >>
> >> [...]
> >>
> >> But I admit it's quite subjective.
> >>
> >>
> >> Providing SQL NULL to this function and asking it to do something with
> >> that is indeed subjective - with no obvious reasonable default, and I
> >> agree that "return a NULL", while possibly consistent, is probably the
> >> least useful behavior that could have been chosen.  We should never
> >> have allowed an SQL NULL to be an acceptable argument in the first
> >> place, and can reasonably safely and effectively prevent it going
> >> forward.  Then people will have to explicitly code what they want to
> >> do if their data and queries present this invalid unknown data to the
> >> function.
> >>
> >>
> >
> >How exactly do we prevent a NULL being passed as an argument? The only
> >thing we could do would be to raise an exception, I think. That seems
> >like a fairly ugly thing to do, I'd need a heck of a lot of convincing.
> >
>
> I don't know, but if we don't know what the "right" behavior with NULL
> is, is raising an exception really that ugly?
>

Raising an exception at least would prevent people from blanking their
column out unintentionally.

And I am willing to write a patch to do that if we have consensus on how to
change it.
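
(For illustration, a user-level wrapper demonstrating the exception-raising
behavior could look roughly like this - a sketch of the idea, not the core
patch being discussed:)

   create or replace function jsonb_set_strict(target jsonb, path text[],
           new_value jsonb, create_missing boolean default true)
   returns jsonb as $$
   begin
       -- refuse to silently produce NULL: fail loudly instead
       if target is null or new_value is null then
           raise exception 'jsonb_set_strict: NULL argument';
       end if;
       return jsonb_set(target, path, new_value, create_missing);
   end;
   $$ language plpgsql;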

Ariadne


Re: jsonb_set() strictness considered harmful to data

2019-10-19 Thread Tomas Vondra

On Sat, Oct 19, 2019 at 12:47:39PM -0400, Andrew Dunstan wrote:


On 10/19/19 12:32 PM, David G. Johnston wrote:

On Sat, Oct 19, 2019 at 9:19 AM Tomas Vondra
mailto:tomas.von...@2ndquadrant.com>>
wrote:

>
>We invented jsonb_set() (credit to Dmitry Dolgov). And we've had it
>since 9.5. That's five releases ago.  So it's a bit late to be
coming to
>us telling us it's not safe (according to your preconceptions of
what it
>should be doing).
>


There have been numerous complaints and questions about this behavior
in those five years; and none of the responses to those defenses has
actually made the current behavior sound beneficial but rather have
simply said "this is how it works, deal with it".



I haven't seen a patch, which for most possible solutions should be
fairly simple to code. This is open source. Code speaks louder than
complaints.



IMHO that might be a bit too harsh - I'm not surprised no one sent a
patch when we're repeatedly telling people "you're holding it wrong".
Without a clear consensus what the "correct" behavior is, I wouldn't
send a patch either.





>
>We could change it prospectively (i.e. from release 13 on) if we
choose.
>But absent an actual bug (i.e. acting contrary to documented
behaviour)
>we do not normally backpatch such changes, especially when there is a
>simple workaround for the perceived problem. And it's that policy
that
>is in large measure responsible for Postgres' deserved reputation for
>stability.
>

Yeah.


Agreed, this is v13 material if enough people come on board to support
making a change.




We have changed such things in the past. But maybe a new function might
be a better way to go. I haven't given it enough thought yet.



I think the #1 thing we should certainly do is explain the behavior
in the docs.






>And if we were to change it I'm not at all sure that we should do
it the
>way that's suggested here, which strikes me as no more intuitive than
>the current behaviour. Rather I think we should possibly fill in
a json
>null in the indicated place.
>

Not sure, but that seems rather confusing to me, because it's
mixing SQL
NULL and JSON null, i.e. it's not clear to me why

[...]

But I admit it's quite subjective.


Providing SQL NULL to this function and asking it to do something with
that is indeed subjective - with no obvious reasonable default, and I
agree that "return a NULL", while possibly consistent, is probably the
least useful behavior that could have been chosen.  We should never
have allowed an SQL NULL to be an acceptable argument in the first
place, and can reasonably safely and effectively prevent it going
forward.  Then people will have to explicitly code what they want to
do if their data and queries present this invalid unknown data to the
function.




How exactly do we prevent a NULL being passed as an argument? The only
thing we could do would be to raise an exception, I think. That seems
like a fairly ugly thing to do, I'd need a heck of a lot of convincing.



I don't know, but if we don't know what the "right" behavior with NULL
is, is raising an exception really that ugly?


regards

--
Tomas Vondra  http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services 





Re: Postgres Point in time Recovery (PITR),

2019-10-19 Thread Avinash Kumar
Hi,

On Sat, Oct 19, 2019 at 11:16 PM Daulat Ram 
wrote:

> Hi All,
>
>
>
> Thanks for your suggestions.
>
> One more question: how are backups useful if we have streaming
> replication? As I know, we can promote the standby to primary in case of
> a disaster on the primary side. Do we need to schedule backups if we have
> streaming replication?
>
1. What if you realize that someone has dropped a table or accidentally
made a change that requires you to recover some table/database from old
backups?
2. Some organizations require you to store backups for a few days/months
or even years, because you should be able to perform recovery from any
given point in the past.

Streaming replication (unless delayed explicitly) applies the changes
immediately. So it may be very late by the time you realize that some
accidental change has done damage, because the damage has happened on
both master and standby.

Regards,
Avinash Vallarapu.

>
>
> Thanks
>
>
>
> *From:* Avinash Kumar 
> *Sent:* Friday, October 18, 2019 5:28 PM
> *To:* David Steele 
> *Cc:* Luca Ferrari ; Andreas Joseph Krogh <
> andr...@visena.com>; Daulat Ram ;
> pgsql-general@lists.postgresql.org
> *Subject:* Re: Postgres Point in time Recovery (PITR),
>
>
>
> Hi Daulat,
>
>
>
> PITR entirely depends on what type of backups you choose.
> Sometimes, to reduce the amount of downtime involved while restoring and
> recovering a backup, you may also use an additional delayed standby.
>
> You could use the PG built-in feature to delay the replication and
> fast-forward it to the safest point to achieve PITR. But this requires you
> to have an additional standby.
>
>
> https://www.percona.com/blog/2018/06/28/faster-point-in-time-recovery-pitr-postgresql-using-delayed-standby/
>
>
>
> If you have several TBs of database, pgBackRest is of course a way to go
> for backups (there are a few more open source solutions), but also consider
> the amount of time it takes for recovery. Keeping all of this in mind, your
> approach to PITR changes.
>
>
>
> So I would ask you this question: what is the backup tool you use and what
> is your backup strategy? Are you taking a physical backup and performing
> continuous archiving of WALs? The answer to your question entirely depends
> on this. :)
>
>
>
> Regards,
> Avinash Vallarapu.
>
>
>
>
>
>
>
> On Fri, Oct 18, 2019 at 5:17 PM David Steele  wrote:
>
> On 10/18/19 11:29 AM, Luca Ferrari wrote:
> > On Fri, Oct 18, 2019 at 10:30 AM Andreas Joseph Krogh
> >  wrote:
> >> We use barman (https://www.pgbarman.org/) for continuous streaming
> backup and I had to restore from it once, and it went like this:
> >
> > Just for the records, here's an example of restore with pgbackrest:
> >
> > % sudo -u postgres pgbackrest --stanza=miguel \
> > --log-level-console=info --delta restore
> > ...
> > INFO: restore backup set 20190916-125652F
> > INFO: remove invalid files/paths/links from /postgres/pgdata/11
> > INFO: cleanup removed 148 files, 3 paths
> > ...
> > INFO: write /postgres/pgdata/11/recovery.conf
> > INFO: restore global/pg_control (performed last
> >  to ensure aborted restores cannot be started)
> > INFO: restore command end: completed successfully (5113ms)
>
> pgBackRest also has a tutorial on PITR:
> https://pgbackrest.org/user-guide.html#pitr
>
> --
> -David
> da...@pgmasters.net
>
>
>
>
> --
>
> 9000799060
>


-- 
9000799060


Re: Postgres Point in time Recovery (PITR),

2019-10-19 Thread Jeff Janes
On Fri, Oct 18, 2019 at 1:59 AM Daulat Ram 
wrote:

> Hello All,
>
> Can you please share some ideas and scenarios how we can do the PITR in
> case of disaster.
>

It depends on what you mean by "disaster".  Usually I think that would mean
your server (or entire data center) was destroyed.  In this case, you would
want to restore to the latest time available.  I would say that this is not
PITR at all, that is just regular recovery.

If someone truncated a table 3 weeks ago, and you didn't realize it until
today, that is a scenario for PITR.  Are you using "disaster" to cover this
scenario?

Cheers,

Jeff

>


Re: jsonb_set() strictness considered harmful to data

2019-10-19 Thread Adrian Klaver

On 10/18/19 7:18 PM, Ariadne Conill wrote:

Hello,

On Fri, Oct 18, 2019 at 7:04 PM Adrian Klaver  wrote:


On 10/18/19 4:31 PM, Ariadne Conill wrote:

Hello,

On Fri, Oct 18, 2019 at 6:01 PM Adrian Klaver  wrote:


On 10/18/19 3:11 PM, Ariadne Conill wrote:

Hello,

On Fri, Oct 18, 2019 at 5:01 PM David G. Johnston
 wrote:


On Fri, Oct 18, 2019 at 2:50 PM Christoph Moench-Tegeder  
wrote:


## Ariadne Conill (aria...@dereferenced.org):


  update users set info=jsonb_set(info, '{bar}', info->'foo');

Typically, this works nicely, except for cases where evaluating
info->'foo' results in an SQL null being returned.  When that happens,
jsonb_set() returns an SQL null, which then results in data loss.[3]


So why don't you use the facilities of SQL to make sure to only
touch the rows which match the prerequisites?

 UPDATE users SET info = jsonb_set(info, '{bar}', info->'foo')
   WHERE info->'foo' IS NOT NULL;



There are many ways to add code to queries to make working with this function safer - though using 
them presupposes one remembers at the time of writing the query that there is danger and caveats in 
using this function.  I agree that we should have (and now) provided sane defined behavior when one 
of the inputs to the function is null instead of blowing off the issue and defining the function as 
being strict.  Whether that is "ignore and return the original object" or "add the 
key with a json null scalar value" is debatable but either is considerably more useful than 
returning SQL NULL.


A great example of how we got burned by this last year: Pleroma
maintains pre-computed counters in JSONB for various types of
activities (posts, followers, followings).  Last year, another counter
was added, with a migration.  But some people did not run the
migration, because they are users, and that's what users do.  This


So you are more forgiving of your misstep, allowing users to run
outdated code, than of running afoul of Postgres's documented behavior:


I'm not forgiving of either.


https://www.postgresql.org/docs/11/functions-json.html
" The field/element/path extraction operators return NULL, rather than
failing, if the JSON input does not have the right structure to match
the request; for example if no such element exists"


It is known that the extraction operators return NULL.  The problem
here is jsonb_set() returning NULL when it encounters SQL NULL.


I'm not following. Your original case was:

jsonb_set(info, '{bar}', info->'foo');

where info->'foo' is equivalent to:

test=# select '{"f1":1,"f2":null}'::jsonb ->'f3';
 ?column?
----------
 NULL

So you know there is a possibility that a value extraction could return
NULL and from your wrapper that COALESCE is the way to deal with this.


You're not following because you don't want to follow.

It does not matter that info->'foo' is in my example.  That's not what
I am talking about.

What I am talking about is that jsonb_set(..., ..., NULL) returns SQL NULL:
postgres=# \pset null '(null)'
Null display is "(null)".
postgres=# select jsonb_set('{"a":1,"b":2,"c":3}'::jsonb, '{a}', NULL);
 jsonb_set
-----------
(null)
(1 row)

This behaviour is basically giving an application developer a loaded
shotgun and pointing it at their feet.  It is not a good design.  It
is a design which has likely led to many users experiencing
unintentional data loss.


create table null_test(fld_1 integer, fld_2 integer);

insert into null_test values(1, 2), (3, NULL);

select * from null_test ;
 fld_1 | fld_2
-------+-------
     1 |     2
     3 |  NULL
(2 rows)

update null_test set fld_1 = fld_1 + fld_2;

select * from null_test ;
 fld_1 | fld_2
-------+-------
     3 |     2
  NULL |  NULL

Failure to account for NULL is a generic issue. Given that this is only the
second post I can find that deals with this in going on 4 years, I am
guessing most users have dealt with it. If you really think this rises
to the level of a bug, then I would suggest filing a report here:


https://www.postgresql.org/account/login/?next=/account/submitbug/





Ariadne




--
Adrian Klaver
adrian.kla...@aklaver.com




RE: Postgres Point in time Recovery (PITR),

2019-10-19 Thread Daulat Ram
Hi All,

Thanks for your suggestions.
One more question: how are backups useful if we have streaming replication?
As I know, we can promote the standby to primary in case of a disaster on
the primary side. Do we need to schedule backups if we have streaming
replication?

Thanks

From: Avinash Kumar 
Sent: Friday, October 18, 2019 5:28 PM
To: David Steele 
Cc: Luca Ferrari ; Andreas Joseph Krogh 
; Daulat Ram ; 
pgsql-general@lists.postgresql.org
Subject: Re: Postgres Point in time Recovery (PITR),

Hi Daulat,

PITR entirely depends on what type of backups you choose.
Sometimes, to reduce the amount of downtime involved while restoring and 
recovering a backup, you may also use an additional delayed standby.
You could use the PG built-in feature to delay the replication and fast-forward 
it to the safest point to achieve PITR. But this requires you to have an 
additional standby.
https://www.percona.com/blog/2018/06/28/faster-point-in-time-recovery-pitr-postgresql-using-delayed-standby/
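
(For illustration, a minimal sketch of that built-in delay - a 9.6-era
recovery.conf on the extra standby; the host name and delay value are
assumptions:)

   standby_mode = 'on'
   primary_conninfo = 'host=primary.example.com user=replicator'
   recovery_min_apply_delay = '4h'   # keep this standby 4 hours behind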

If you have several TBs of database, pgBackRest is of course a way to go for 
backups (there are a few more open source solutions), but also consider the 
amount of time it takes for recovery. Keeping all of this in mind, your 
approach to PITR changes.

So I would ask you this question: what is the backup tool you use and what is
your backup strategy? Are you taking a physical backup and performing
continuous archiving of WALs? The answer to your question entirely depends on
this. :)

Regards,
Avinash Vallarapu.



On Fri, Oct 18, 2019 at 5:17 PM David Steele 
mailto:da...@pgmasters.net>> wrote:
On 10/18/19 11:29 AM, Luca Ferrari wrote:
> On Fri, Oct 18, 2019 at 10:30 AM Andreas Joseph Krogh
> mailto:andr...@visena.com>> wrote:
>> We use barman (https://www.pgbarman.org/) for continuous streaming backup 
>> and I had to restore from it once, and it went like this:
>
> Just for the records, here's an example of restore with pgbackrest:
>
> % sudo -u postgres pgbackrest --stanza=miguel \
> --log-level-console=info --delta restore
> ...
> INFO: restore backup set 20190916-125652F
> INFO: remove invalid files/paths/links from /postgres/pgdata/11
> INFO: cleanup removed 148 files, 3 paths
> ...
> INFO: write /postgres/pgdata/11/recovery.conf
> INFO: restore global/pg_control (performed last
>  to ensure aborted restores cannot be started)
> INFO: restore command end: completed successfully (5113ms)

pgBackRest also has a tutorial on PITR:
https://pgbackrest.org/user-guide.html#pitr

--
-David
da...@pgmasters.net



--
9000799060


Re: jsonb_set() strictness considered harmful to data

2019-10-19 Thread Andrew Dunstan


On 10/19/19 12:32 PM, David G. Johnston wrote:
> On Sat, Oct 19, 2019 at 9:19 AM Tomas Vondra
> mailto:tomas.von...@2ndquadrant.com>>
> wrote:
>
> >
> >We invented jsonb_set() (credit to Dmitry Dolgov). And we've had it
> >since 9.5. That's five releases ago.  So it's a bit late to be
> coming to
> >us telling us it's not safe (according to your preconceptions of
> what it
> >should be doing).
> >
>
>
> There have been numerous complaints and questions about this behavior
> in those five years; and none of the responses to those defenses has
> actually made the current behavior sound beneficial but rather have
> simply said "this is how it works, deal with it".


I haven't seen a patch, which for most possible solutions should be
fairly simple to code. This is open source. Code speaks louder than
complaints.


>
> >
> >We could change it prospectively (i.e. from release 13 on) if we
> choose.
> >But absent an actual bug (i.e. acting contrary to documented
> behaviour)
> >we do not normally backpatch such changes, especially when there is a
> >simple workaround for the perceived problem. And it's that policy
> that
> >is in large measure responsible for Postgres' deserved reputation for
> >stability.
> >
>
> Yeah.
>
>
> Agreed, this is v13 material if enough people come on board to support
> making a change.



We have changed such things in the past. But maybe a new function might
be a better way to go. I haven't given it enough thought yet.



>
> >And if we were to change it I'm not at all sure that we should do
> it the
> >way that's suggested here, which strikes me as no more intuitive than
> >the current behaviour. Rather I think we should possibly fill in
> a json
> >null in the indicated place.
> >
>
> Not sure, but that seems rather confusing to me, because it's
> mixing SQL
> NULL and JSON null, i.e. it's not clear to me why
>
> [...]
>
> But I admit it's quite subjective.
>
>
> Providing SQL NULL to this function and asking it to do something with
> that is indeed subjective - with no obvious reasonable default, and I
> agree that "return a NULL", while possibly consistent, is probably the
> least useful behavior that could have been chosen.  We should never
> have allowed an SQL NULL to be an acceptable argument in the first
> place, and can reasonably safely and effectively prevent it going
> forward.  Then people will have to explicitly code what they want to
> do if their data and queries present this invalid unknown data to the
> function.
>
>

How exactly do we prevent a NULL being passed as an argument? The only
thing we could do would be to raise an exception, I think. That seems
like a fairly ugly thing to do, I'd need a heck of a lot of convincing.


cheers


andrew



-- 
Andrew Dunstan  https://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services





Re: jsonb_set() strictness considered harmful to data

2019-10-19 Thread David G. Johnston
On Sat, Oct 19, 2019 at 9:19 AM Tomas Vondra 
wrote:

> >
> >We invented jsonb_set() (credit to Dmitry Dolgov). And we've had it
> >since 9.5. That's five releases ago.  So it's a bit late to be coming to
> >us telling us it's not safe (according to your preconceptions of what it
> >should be doing).
> >
>

There have been numerous complaints and questions about this behavior in
those five years; and none of the responses to those defenses has actually
made the current behavior sound beneficial but rather have simply said
"this is how it works, deal with it".

>
> >We could change it prospectively (i.e. from release 13 on) if we choose.
> >But absent an actual bug (i.e. acting contrary to documented behaviour)
> >we do not normally backpatch such changes, especially when there is a
> >simple workaround for the perceived problem. And it's that policy that
> >is in large measure responsible for Postgres' deserved reputation for
> >stability.
> >
>
> Yeah.
>
>
Agreed, this is v13 material if enough people come on board to support
making a change.

>
> >And if we were to change it I'm not at all sure that we should do it the
> >way that's suggested here, which strikes me as no more intuitive than
> >the current behaviour. Rather I think we should possibly fill in a json
> >null in the indicated place.
> >
>
> Not sure, but that seems rather confusing to me, because it's mixing SQL
> NULL and JSON null, i.e. it's not clear to me why
>
[...]

> But I admit it's quite subjective.
>

Providing SQL NULL to this function and asking it to do something with that
is indeed subjective - with no obvious reasonable default, and I agree that
"return a NULL" while possible consistent is probably the least useful
behavior that could have been chosen.  We should never have allowed an SQL
NULL to be an acceptable argument in the first place, and can reasonably
safely and effectively prevent it going forward.  Then people will have to
explicitly code what they want to do if their data and queries present this
invalid unknown data to the function.

David J.


Re: jsonb_set() strictness considered harmful to data

2019-10-19 Thread Andrew Dunstan


On 10/19/19 12:18 PM, Tomas Vondra wrote:
> On Sat, Oct 19, 2019 at 11:26:50AM -0400, Andrew Dunstan wrote:
>
> Not sure, but that seems rather confusing to me, because it's mixing SQL
> NULL and JSON null, i.e. it's not clear to me why
>
>    jsonb_set(..., "...", NULL)
>
> should do the same thing as
>
>    jsonb_set(..., "...", 'null':jsonb)
>
> I'm not entirely surprised it's what MySQL does ;-) but I'd say treating
> it as a deletion of the key (just like MSSQL) is somewhat more sensible.
> But I admit it's quite subjective.
>


That's yet another variant, which just reinforces my view that there is
no guaranteed-intuitive behaviour here.


OTOH, to me, turning jsonb_set into jsonb_delete for some cases seems ...
odd.


cheers


andrew


-- 
Andrew Dunstan  https://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services





Re: jsonb_set() strictness considered harmful to data

2019-10-19 Thread Tomas Vondra

On Sat, Oct 19, 2019 at 11:21:26AM -0400, Stephen Frost wrote:

Greetings,

* Dmitry Dolgov (9erthali...@gmail.com) wrote:

If we want to change it, the question is where to stop? Essentially we have:

update table set data = some_func(data, some_args_with_null);

where some_func happened to be jsonb_set, but could be any strict function.


I don't think it makes any sense to try and extrapolate this out to
other strict functions.  Functions should be strict when it makes sense
for them to be- in this case, it sounds like it doesn't really make
sense for jsonb_set to be strict, and that's where we stop it.



Yeah. I think the issue here is (partially) that other databases adopted
similar functions after us, but decided to use a different behavior. It
might be more natural for the users, but that does not mean we should
change the other strict functions.

Plus I'm not sure if the SQL standard says anything about strict functions
(I found nothing, but I looked only very quickly), but I'm pretty sure
we can't change how basic operators behave, and we translate them to
function calls (e.g. 1+2 is int4pl(1,2)).


I wonder if in this case it makes sense to think about an alternative? For
example, there is generic type subscripting patch, that allows to update a
jsonb in the following way:

update table set jsonb_data[key] = 'value';

It doesn't look like a function, so it's not a big deal if it handles NULL
values differently. And at the same time one can argue that people who are
not aware of this caveat with jsonb_set and NULL values will most likely
use it due to its slightly simpler syntax (more similar to some popular
programming languages).


This seems like an entirely independent thing ...



Right. Useful, but entirely separate feature.


regards

--
Tomas Vondra  http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services 





Re: jsonb_set() strictness considered harmful to data

2019-10-19 Thread Tomas Vondra

On Sat, Oct 19, 2019 at 11:26:50AM -0400, Andrew Dunstan wrote:


...

The hyperbole here is misplaced. There is a difference between a bug and
a POLA violation. This might be the latter, but it isn't the former. So
please tone it down a bit. It's not the function that's unsafe, but the
ill-informed use of it.


We invented jsonb_set() (credit to Dmitry Dolgov). And we've had it
since 9.5. That's five releases ago.  So it's a bit late to be coming to
us telling us it's not safe (according to your preconceptions of what it
should be doing).


We could change it prospectively (i.e. from release 13 on) if we choose.
But absent an actual bug (i.e. acting contrary to documented behaviour)
we do not normally backpatch such changes, especially when there is a
simple workaround for the perceived problem. And it's that policy that
is in large measure responsible for Postgres' deserved reputation for
stability.



Yeah.



Incidentally, why is your function written in plpgsql? Wouldn't a simple
SQL wrapper be better?


   create or replace function safe_jsonb_set
       (target jsonb, path text[], new_value jsonb, create_missing
   boolean default true)
   returns jsonb as
   $func$
       select case when new_value is null then target else
   jsonb_set(target, path, new_value, create_missing) end
   $func$ language sql;


And if we were to change it I'm not at all sure that we should do it the
way that's suggested here, which strikes me as no more intuitive than
the current behaviour. Rather I think we should possibly fill in a json
null in the indicated place.



Not sure, but that seems rather confusing to me, because it's mixing SQL
NULL and JSON null, i.e. it's not clear to me why

   jsonb_set(..., "...", NULL)

should do the same thing as

   jsonb_set(..., "...", 'null':jsonb)

I'm not entirely surprised it's what MySQL does ;-) but I'd say treating
it as a deletion of the key (just like MSSQL) is somewhat more sensible.
But I admit it's quite subjective.

regards

--
Tomas Vondra  http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services 





Re: jsonb_set() strictness considered harmful to data

2019-10-19 Thread Andrew Dunstan


On 10/18/19 3:10 PM, Mark Felder wrote:
>
> On Fri, Oct 18, 2019, at 12:37, Ariadne Conill wrote:
>> Hello,
>>
>> I am one of the primary maintainers of Pleroma, a federated social
>> networking application written in Elixir, which uses PostgreSQL in
>> ways that may be considered outside the typical usage scenarios for
>> PostgreSQL.
>>
>> Namely, we leverage JSONB heavily as a backing store for JSON-LD
>> documents[1].  We also use JSONB in combination with Ecto's "embedded
>> structs" to store things like user preferences.
>>
>> The fact that we can use JSONB to achieve our design goals is a
>> testament to the flexibility PostgreSQL has.
>>
>> However, in the process of doing so, we have discovered a serious flaw
>> in the way jsonb_set() functions, but upon reading through this
>> mailing list, we have discovered that this flaw appears to be an
>> intentional design.[2]
>>
>> A few times now, we have written migrations that do things like copy
>> keys in a JSONB object to a new key, to rename them.  These migrations
>> look like so:
>>
>>update users set info=jsonb_set(info, '{bar}', info->'foo');
>>
>> Typically, this works nicely, except for cases where evaluating
>> info->'foo' results in an SQL null being returned.  When that happens,
>> jsonb_set() returns an SQL null, which then results in data loss.[3]
>>
>> This is not acceptable.  PostgreSQL is a database that is renowned for
>> data integrity, but here it is wiping out data when it encounters a
>> failure case.  The way jsonb_set() should fail in this case is to
>> simply return the original input: it should NEVER return SQL null.
>>
>> But hey, we've been burned by this so many times now that we'd like to
>> donate a useful function to the commons, consider it a mollyguard for
>> the real jsonb_set() function.
>>
>> create or replace function safe_jsonb_set(target jsonb, path
>> text[], new_value jsonb, create_missing boolean default true) returns
>> jsonb as $$
>> declare
>>   result jsonb;
>> begin
>>   result := jsonb_set(target, path, coalesce(new_value,
>> 'null'::jsonb), create_missing);
>>   if result is NULL then
>> return target;
>>   else
>> return result;
>>   end if;
>> end;
>> $$ language plpgsql;
>>
>> This safe_jsonb_set() wrapper should not be necessary.  PostgreSQL's
>> own jsonb_set() should have this safety feature built in.  Without it,
>> using jsonb_set() is like playing russian roulette with your data,
>> which is not a reasonable expectation for a database renowned for its
>> commitment to data integrity.
>>
>> Please fix this bug so that we do not have to hack around this bug.
>> It has probably ruined countless people's days so far.  I don't want
>> to hear about how the function is strict, I'm aware it is strict, and
>> that strictness is harmful.  Please fix the function so that it is
>> actually safe to use.
>>
>> [1]: JSON-LD stands for JSON Linked Data.  Pleroma has an "internal
>> representation" that shares similar qualities to JSON-LD, so I use
>> JSON-LD here as a simplification.
>>
>> [2]: 
>> https://www.postgresql.org/message-id/flat/qfkua9$2q0e$1...@blaine.gmane.org
>>
>> [3]: https://git.pleroma.social/pleroma/pleroma/issues/1324 is an
>> example of data loss induced by this issue.
>>
>> Ariadne
>>
> This should be directed towards the hackers list, too.
>
> What will it take to change the semantics of jsonb_set()? MySQL implements 
> safe behavior here. It's a real shame Postgres does not. I'll offer a $200 
> bounty to whoever fixes it. I'm sure it's destroyed more than $200 worth of 
> data and people's time by now, but it's something.
>
>


The hyperbole here is misplaced. There is a difference between a bug and
a POLA violation. This might be the latter, but it isn't the former. So
please tone it down a bit. It's not the function that's unsafe, but the
ill-informed use of it.


We invented jsonb_set() (credit to Dmitry Dolgov). And we've had it
since 9.5. That's five releases ago.  So it's a bit late to be coming to
us telling us it's not safe (according to your preconceptions of what it
should be doing).


We could change it prospectively (i.e. from release 13 on) if we choose.
But absent an actual bug (i.e. acting contrary to documented behaviour)
we do not normally backpatch such changes, especially when there is a
simple workaround for the perceived problem. And it's that policy that
is in large measure responsible for Postgres' deserved reputation for
stability.


Incidentally, why is your function written in plpgsql? Wouldn't a simple
SQL wrapper be better?


create or replace function safe_jsonb_set
    (target jsonb, path text[], new_value jsonb, create_missing
boolean default true)
returns jsonb as
$func$
    select case when new_value is null then target else
jsonb_set(target, path, new_value, create_missing) end
$func$ language sql;


And if we were to change it I'm not at all sure that we should do it the
way that's suggested here, which strikes me as no more intuitive than
the current behaviour. Rather I think we should possibly fill in a json
null in the indicated place.

Re: jsonb_set() strictness considered harmful to data

2019-10-19 Thread Stephen Frost
Greetings,

* Dmitry Dolgov (9erthali...@gmail.com) wrote:
> If we want to change it, the question is where to stop? Essentially we have:
> 
> update table set data = some_func(data, some_args_with_null);
> 
> where some_func happened to be jsonb_set, but could be any strict function.

I don't think it makes any sense to try and extrapolate this out to
other strict functions.  Functions should be strict when it makes sense
for them to be- in this case, it sounds like it doesn't really make
sense for jsonb_set to be strict, and that's where we stop it.

> I wonder if in this case it makes sense to think about an alternative? For
> example, there is generic type subscripting patch, that allows to update a
> jsonb in the following way:
> 
> update table set jsonb_data[key] = 'value';
> 
> It doesn't look like a function, so it's not a big deal if it handles NULL
> values differently. And at the same time one can argue that people who are
> not aware of this caveat with jsonb_set and NULL values will most likely
> use it due to its slightly simpler syntax (more similar to some popular
> programming languages).

This seems like an entirely independent thing ...

Thanks,

Stephen




Re: jsonb_set() strictness considered harmful to data

2019-10-19 Thread Christoph Moench-Tegeder
## Ariadne Conill (aria...@dereferenced.org):

> NULL propagation makes sense in the context of traditional SQL.  What
> users expect from the JSONB support is for it to behave as JSON
> manipulation behaves everywhere else.

Well, some users expect that. Others are using this interface as it is
documented and implemented right now. And that's what makes this a
somewhat difficult case: I wouldn't argue for one behaviour or the
other if this was new functionality. But jsonb_set() was added in 9.5,
and changing that behaviour now will make other people about as unhappy
as you are right now.
Further, "now" is a rather flexible term: the function cannot be changed
"right now" with the next bugfix release (may break existing applications,
deterring people from installing bugfixes: very bad) and there's about
no way to get a new function into a bugfix release (catversion bump).
The next chance to do anything here is version 13, to be expected around
this time next year. This gives us ample time to think about a solution
which is consistent and works for (almost) everyone - no need to force
a behaviour change in that function right now (and in case it comes to
that: which other json/jsonb-functions would be affected?).

That creates a kind of bind for your case: you cannot rely on the new
behaviour until the new version is in reasonably widespread use.
Database servers are long-lived beasts - in the field, version 8.4
has finally mostly disappeared this year, but we still get some
questions about that version here on the lists (8.4 went EOL over
five years ago). At some point, you'll need to make a cut and require
your users to upgrade the database.

>   At some point, you have to start pondering whether the behaviour
> does not make logical sense in the context that people frame the JSONB
> type and it's associated manipulation functions.

But it does make sense from a SQL point of view - and this is a SQL
database. JSON is not SQL (the sheer number of "Note"s in between the
JSON functions and operators documentation is proof of that), and not
ASN.1 either; "people expect" depends a lot on what kind of people you
ask. None of these expectations is "right" or "wrong" in an absolute
manner. Code has to be "absolute" in order to be deterministic, and it
should be so in a way that surprises the fewest users: I'm
willing to concede that jsonb_set() fails this test, but I'm still not
convinced that your approach is much better just because it fits your
specific use case.

> It is not *obvious*
> that jsonb_set() will trash your data, but that is what it is capable
> of doing.

It didn't. The data still fit the constraints you put on it: none,
unfortunately. Which leads me to the advice for the time being (until
we have this sorted out in one way or another, possibly in the next
major release): at least put a NOT NULL on columns which must not be
NULL - that alone would have gone a long way toward preventing the
issues you've unfortunately had. You could even put CHECK constraints
on your JSONB (like "CHECK (j->'info' IS NOT NULL)") to make sure it
stays well-formed. As a SQL person, I'd even argue that you shouldn't
use JSON columns for key data - there is a certain mismatch between
SQL and JSON, which will get you now and then, and once you've
implemented all the checks to be safe, you've built a type system
the database would have given you for free. (And running UPDATEs
inside your JSONB fields is not as efficient as on simple columns.)
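
(For illustration, a sketch of those constraints - the table, column and
key names are assumptions:)

   ALTER TABLE users
       ALTER COLUMN info SET NOT NULL,
       ADD CONSTRAINT info_has_followers
           CHECK (info->'followers' IS NOT NULL);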
And finally, you might put some version information in your
database schema, so the application can check whether all the necessary
data migrations have been run.

Regards,
Christoph

-- 
Spare Space




Re: jsonb_set() strictness considered harmful to data

2019-10-19 Thread Dmitry Dolgov
> On Sat, Oct 19, 2019 at 1:08 PM Tomas Vondra  
> wrote:
>
> >Here is how other implementations handle this case:
> >
> >MySQL/MariaDB:
> >
> >select json_set('{"a":1,"b":2,"c":3}', '$.a', NULL) results in:
> >   {"a":null,"b":2,"c":3}
> >
> >Microsoft SQL Server:
> >
> >select json_modify('{"a":1,"b":2,"c":3}', '$.a', NULL) results in:
> >   {"b":2,"c":3}
> >
> >Both of these outcomes make sense, given the nature of JSON objects.
> >I am actually more in favor of what MSSQL does however, I think that
> >makes the most sense of all.
> >
>
> I do mostly agree with this. The json[b]_set behavior seems rather
> surprising, and I think I've seen a couple of cases running into exactly
> this issue. I've solved that with a simple CASE, but maybe changing the
> behavior would be better. That's unlikely to be back-patchable, though,
> so maybe a better option is to create a non-strict wrappers. But that
> does not work when the user is unaware of the behavior :-(
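
(For reference, the simple CASE approach mentioned above looks roughly
like this - the table and key names are assumptions:)

   UPDATE users
      SET info = CASE WHEN info->'foo' IS NULL THEN info
                      ELSE jsonb_set(info, '{bar}', info->'foo')
                 END;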

Agree, that could be confusing. If I remember correctly, so far I've seen
four or five such complaints in the mailing lists, but of course the number
of people who didn't reach out to hackers is probably bigger.

If we want to change it, the question is where to stop? Essentially we have:

update table set data = some_func(data, some_args_with_null);

where some_func happened to be jsonb_set, but could be any strict function.

I wonder if in this case it makes sense to think about an alternative? For
example, there is generic type subscripting patch, that allows to update a
jsonb in the following way:

update table set jsonb_data[key] = 'value';

It doesn't look like a function, so it's not a big deal if it handles NULL
values differently. And at the same time one can argue that people who are
not aware of this caveat with jsonb_set and NULL values will most likely
use it due to its slightly simpler syntax (more similar to some popular
programming languages).




Re: Has there been any discussion of custom dictionaries being defined in the database?

2019-10-19 Thread Tomas Vondra

On Thu, Oct 17, 2019 at 11:52:39AM +0200, Tom Lane wrote:

Morris de Oryx  writes:

Given that Amazon is bragging this week about turning off Oracle, it seems
like they could kick some resources towards contributing something to the
Postgres project. With that in mind, is the idea of defining dictionaries
within a table somehow meritless, or unexpectedly difficult?


Well, it'd just be totally different.  I don't think anybody cares to
provide two separate definitions of common dictionaries (which'd have to
somehow be kept in sync).

As for why we did it with external text files in the first place ---
for at least some of the dictionary types, the point is that you can
drop in data files that are available from upstream sources, without any
modification.  Getting the same info into a table would require some
nonzero amount of data transformation.



IMHO being able to load dictionaries from a table would be quite
useful, and not just because of RDS. For example, it's not entirely true
we're just using the upstream dictionaries verbatim - it's quite common
to add new words, particularly in specialized fields. That's way easier
when you can do that through a table and not through a file.


Having said that ... in the end a dictionary is really just a set of
functions implementing the dictionary API; where they get their data
from is their business.  So in theory you could roll your own
dictionary that gets its data out of a table.  But the dictionary API
would be pretty hard to implement except in C, and I bet RDS doesn't
let you install your own C functions either :-(
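
(For context, a minimal sketch of how a dictionary is wired up today - the
ispell template's C functions read files from $SHAREDIR/tsearch_data; the
file names here are assumptions:)

   CREATE TEXT SEARCH DICTIONARY english_custom (
       TEMPLATE  = ispell,
       DictFile  = en_custom,    -- reads en_custom.dict
       AffFile   = en_custom,    -- reads en_custom.affix
       StopWords = english
   );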



Not sure. Of course, if we expect the dictionary to work just like the
ispell one, with preprocessing the dictionary into shmem, then that
requires C. I don't think that's entirely necessary, though - we could
use the table directly. Yes, that would be slower, but maybe it'd be
sufficient.

But I think the idea is ultimately that we'd implement a new dict type
in core, and people would just specify which table to load data from.


regards

--
Tomas Vondra  http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Re: releasing space

2019-10-19 Thread Tomas Vondra

On Thu, Oct 17, 2019 at 05:20:09PM +, Julie Nishimura wrote:

Hello everybody, we are running a PostgreSQL 9.6.2 cluster, master ->
standby (streaming replication), with 22 TB of space (we are constantly
struggling with space, pruning old data, but not fast enough). The
biggest db takes 16 TB. So we've copied it to another server, and now we
would like to delete it from the original source to free up the space.
What would be the right approach for this? Just issue a DROP DATABASE
command (16 TB)? How long might it take? Should we do it gradually (drop
the biggest tables first)? Any suggestions? Caveats?



Generally speaking, DROP DATABASE simply recursively drops all the
various objects - indexes, tables, etc. It's mostly just deleting the
files, which should not be very expensive (we certainly don't need to
delete all the data or anything), but there's a certain amount of I/O
involved. Whether that's an issue depends on the OS / filesystem /
hardware.

So if you want to be on the safe side, you can drop the objects one by
one, with a bit of delay between them, to throttle the I/O a bit.
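
(For illustration, a minimal sketch of that gradual approach - the table
names and the delay are assumptions:)

   DROP TABLE IF EXISTS big_table_1;
   SELECT pg_sleep(60);              -- give the storage a breather
   DROP TABLE IF EXISTS big_table_2;
   SELECT pg_sleep(60);
   -- ... once the database is mostly empty, the final drop is cheap
   -- (run from another database):
   -- DROP DATABASE bigdb;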

FWIW the latest minor release for 9.6 is 9.6.15, so you're 13 minor
versions (~30 months) of fixes behind. You might want to consider
upgrading ...


--
Tomas Vondra  http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Re: Replication of Replication

2019-10-19 Thread Tomas Vondra

On Fri, Oct 18, 2019 at 05:54:34PM -0400, Edilmar Alves wrote:
I have 3 servers running CentOS+PG 11 
(postgresql11-server-11.5-1PGDG.rhel7.x86_64):

- s1: main db + publication
- s2: subscription of the main db
all works fine until here...
Now, I tried to config this:
- s2: publication of the same db that was replicated from s1
- s3: subscription of the db from s2
When I did this, replication s1=>s2 stopped, and replication s2=>s3 
never worked.

Is this not possible?
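
(For reference, a minimal sketch of the intended cascade - object and
connection names are assumptions:)

   -- on s2, which already subscribes to s1:
   CREATE PUBLICATION pub_s2 FOR ALL TABLES;

   -- on s3:
   CREATE SUBSCRIPTION sub_s3
       CONNECTION 'host=s2 dbname=maindb user=replicator'
       PUBLICATION pub_s2;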


You'll have to share more details - error messages from the server log,
how you check that the replication stopped, etc.

regards

--
Tomas Vondra  http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Re: Securing records using linux grou permissions

2019-10-19 Thread Peter J. Holzer
On 2019-10-15 13:10:13 -0400, David Gauthier wrote:
> I was hoping there was a way to integrate the user/permissions/groups in linux
> with the PG permissions functionality. 

You can at least map the OS users to DB roles by using the peer or ident
authentication schemes. This way the users won't have to enter their
passwords again. But I think this works only if the client and the
server are on the same host. And you still have to maintain the groups,
although that should be easy to automate.
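
(For illustration, a minimal sketch of that mapping - the map and user
names are assumptions:)

   # pg_hba.conf: local connections authenticate as the OS user via peer
   local   all   all   peer map=osusers

   # pg_ident.conf: map OS user names to database roles
   # MAPNAME   SYSTEM-USERNAME   PG-USERNAME
   osusers     dgauthier         dgauthier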

hp

-- 
   _  | Peter J. Holzer    | we build much bigger, better disasters now
|_|_) |                    | because we have much more sophisticated
| |   | h...@hjp.at        | management tools.
__/   | http://www.hjp.at/ |            -- Ross Anderson




Re: jsonb_set() strictness considered harmful to data

2019-10-19 Thread Ariadne Conill
Hello,

On Sat, Oct 19, 2019 at 12:52 AM Pavel Stehule  wrote:
>
>
>
> so 19. 10. 2019 v 7:41 odesílatel David G. Johnston 
>  napsal:
>>
>> On Friday, October 18, 2019, Pavel Stehule  wrote:
>>
>>>
>>> Probably there will be some applications that need a NULL result in
>>> situations where the value was not changed or the input value does not
>>> have the expected format. The design used in Postgres allows later
>>> customization - with COALESCE you can very simply implement the behavior
>>> you want (sure, you have to know what you are doing). If Postgres
>>> implemented the design used by MySQL, there would be no way to react
>>> when the update is not processed.
>>
>>
>> A CASE expression seems like it would work well for such detection in the 
>> rare case it is needed.  Current behavior is unsafe with minimal or no 
>> redeeming qualities.  Change it so passing in null raises an exception and 
>> make the user decide their own behavior if we don’t want to choose one for 
>> them.
>
>
> How can you do it? Built-in functions cannot return more than one value.
> NULL is one possible signal for emitting this information.
>
> The NULL value can be a problem everywhere - and it is not consistent to
> raise an exception in some places and not in others.
>
> I agree that the safe way is raising an exception on NULL. Unfortunately,
> exception handling is pretty expensive in Postgres (more so in write
> transactions), so it should be used only when it is really necessary.

I would say that anything like

update whatever set column=jsonb_set(column, '{foo}', NULL)

should throw an exception.  It should do, literally, *anything* else
but blank that column.

Ariadne