Re: CPU SPIKE

2019-10-20 Thread Pavel Stehule
Hi

On Mon, Oct 21, 2019 at 7:14, Pawan Sharma wrote:

> Hi All,
>
> We are seeing a really high CPU issue (95-98%) with SELECT statements. The
> queries contain multiple AND operators -- could that cause a CPU spike?
>
> The app team is using sub-partitioning on PG11; 24 CPUs, 16 GB RAM.
>
> ```
> test=# show shared_buffers ;
>  shared_buffers
> ----------------
>  7680MB
> (1 row)
>
> test=# show work_mem ;
>  work_mem
> ----------
>  104MB
> (1 row)
>
> test=# show maintenance_work_mem ;
>  maintenance_work_mem
> ----------------------
>  1GB
> (1 row)
>
> test=# show effective_cache_size ;
>  effective_cache_size
> ----------------------
>  22GB
> (1 row)
>
> test=# show max_worker_processes;
>  max_worker_processes
> ----------------------
>  8
> (1 row)
>
> test=#
> ```
>

It's hard to say what the reason is, but a good debugging tool is `perf top`
if you are on Linux. With it you can see which routines are using the CPU
most intensively.
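
If the pg_stat_statements extension is available (an assumption -- it has to
be in shared_preload_libraries and created with CREATE EXTENSION), a rough
first pass like this sketch can also show which statements eat the most time:

```
-- Sketch: assumes pg_stat_statements is installed and preloaded.
-- total_time / mean_time are the PG11 column names (milliseconds).
SELECT substring(query, 1, 60) AS query,
       calls,
       round(total_time::numeric, 1) AS total_ms,
       round(mean_time::numeric, 1)  AS mean_ms
FROM pg_stat_statements
ORDER BY total_time DESC
LIMIT 10;
```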

Regards

Pavel


>
> Regards,
> Pawan
>


CPU SPIKE

2019-10-20 Thread Pawan Sharma
Hi All,

We are seeing a really high CPU issue (95-98%) with SELECT statements. The
queries contain multiple AND operators -- could that cause a CPU spike?

The app team is using sub-partitioning on PG11; 24 CPUs, 16 GB RAM.

```
test=# show shared_buffers ;
 shared_buffers
----------------
 7680MB
(1 row)

test=# show work_mem ;
 work_mem
----------
 104MB
(1 row)

test=# show maintenance_work_mem ;
 maintenance_work_mem
----------------------
 1GB
(1 row)

test=# show effective_cache_size ;
 effective_cache_size
----------------------
 22GB
(1 row)

test=# show max_worker_processes;
 max_worker_processes
----------------------
 8
(1 row)

test=#
```


Regards,
Pawan


Re: jsonb_set() strictness considered harmful to data

2019-10-20 Thread Abelard Hoffman
>
>
>> I would argue that only if the target parameter (the actual json value)
> is NULL should the result be NULL. The function is documented as returning
> the target, with modifications to a small part of its structure as
> specified by the other parameters. It is strange for the result to suddenly
> collapse down to NULL just because another parameter is NULL. Perhaps if
> the path is NULL, that can mean "don't update". And if create_missing is
> NULL, that should mean the same as not specifying it. I think. At a
> minimum, if we don't change it, the documentation needs to get one of those
> warning boxes alerting people that the functions will destroy their input
> entirely rather than slightly modifying it if any of the other parameters
> are NULL.
>
> My only doubt about any of this is that by the same argument, functions
> like replace() should not return NULL if the 2nd or 3rd parameter is NULL.
> I'm guessing replace() is specified by SQL and also unchanged in many
> versions so therefore not eligible for re-thinking but it still gives me
> just a bit of pause.
>

That's the essential difference though, no? With jsonb, conceptually, we
have a nested row. That's where we get confused. We think that the
operation should affect the element within the nested structure, not the
structure itself.

It would be equivalent to replace() nulling out the entire row on null.
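
(A quick illustration of the strictness in question -- just a sketch:)

```
-- replace() is also STRICT, so a NULL argument nulls the whole result
-- rather than leaving the string untouched.
SELECT replace('abcdef', 'c', NULL);   -- returns NULL, not 'abcdef'
```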

I understand the logic behind it, but I also definitely see why it's not
intuitive.

AH


Re: jsonb_set() strictness considered harmful to data

2019-10-20 Thread rob stone
Hello,

On Sun, 2019-10-20 at 18:51 -0400, Andrew Dunstan wrote:
> On 10/20/19 4:18 PM, Tomas Vondra wrote:
> > 
> >https://www.postgresql.org/docs/12/functions-json.html
> > 
> > but that says nothing about how jsonb_set works with NULL values :-
> > (
> 
> 
> We should certainly fix that. I accept some responsibility for the
> omission.
> 
> 
> 

FWIW, if you are able to update the documentation, the current JSON RFC
is 8259.

https://tools.ietf.org/html/rfc8259


Cheers,
Rob








Re: jsonb_set() strictness considered harmful to data

2019-10-20 Thread raf
Steven Pousty wrote:

> I would think though that raising an exception is better than a
> default behavior which deletes data.

I can't help but feel the need to make the point that
the function is not deleting anything. It is just
returning null. The deletion of data is being performed
by an update statement that uses the function's return
value to set a column value.

I don't agree that raising an exception in the function
is a good idea (perhaps unless it's valid to assume
that this function will only ever be used in such a
context). Making the column not null (as already
suggested) and having the update statement itself raise
the exception seems more appropriate if an exception is
desirable. But that presumes an accurate understanding
of the behaviour of jsonb_set.

Really, I think the best fix would be in the
documentation so that everyone who finds the function
in the documentation understands its behaviour
immediately. I didn't even know there was such a thing
as a strict function or what it means; the documentation
for jsonb_set doesn't mention that it is a strict
function, and the examples of its use don't demonstrate
this behaviour. I'm referring to
https://www.postgresql.org/docs/9.5/functions-json.html.

All of this contributes to the astonishment encountered
here. Least astonishment can probably be achieved with
additional documentation but it has to be where the
reader is looking when they first encounter the
function in the documentation so that their
expectations are set correctly and set early. And
documentation can be "fixed" sooner than PostgreSQL 13.

Perhaps an audit of the documentation for all strict
functions would be a good idea to see if they need
work. Knowing that a function won't be executed at all
and will effectively return null when given a null
argument might be important to know for other functions
as well.
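
Strictness is recorded in the system catalog, so one way to see which
functions behave like this (a sketch) is:

```
-- proisstrict = true means "return NULL on any NULL input"
SELECT proname, proisstrict
FROM pg_proc
WHERE proname IN ('jsonb_set', 'replace');
```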

cheers,
raf





Re: jsonb_set() strictness considered harmful to data

2019-10-20 Thread Andrew Dunstan


On 10/20/19 4:18 PM, Tomas Vondra wrote:
> On Sun, Oct 20, 2019 at 03:48:05PM -0400, Andrew Dunstan wrote:
>>
>> On 10/20/19 1:14 PM, David G. Johnston wrote:
>>> On Sun, Oct 20, 2019 at 5:31 AM Andrew Dunstan wrote:
>>>
>>>     And yet another is to
>>>     raise an exception, which is easy to write but really punts the
>>> issue
>>>     back to the application programmer who will have to decide how to
>>>     ensure
>>>     they never pass in a NULL parameter.
>>>
>>>
>>> That's kinda the point - if they never pass NULL they won't encounter
>>> any problems but as soon as the data and their application don't see
>>> eye-to-eye the application developer has to decide what they want to
>>> do about it.  We are in no position to decide for them and making it
>>> obvious they have a decision to make and implement here doesn't seem
>>> like an improper position to take.
>>
>>
>> The app dev can avoid this problem today by making sure they don't pass
>> a NULL as the value. Or they can use a wrapper function which does that
>> for them. So frankly this doesn't seem like much of an advance. And, as
>> has been noted, it's not consistent with what either MySQL or MSSQL do.
>> In general I'm not that keen on raising an exception for cases like
>> this.
>>
>
> I think the general premise of this thread is that the application
> developer does not realize that this may be necessary, because it's a bit
> surprising behavior, particularly when they have more experience with other
> databases that behave differently. It's also pretty easy to not notice
> this issue for a long time, resulting in significant data loss.
>
> Let's say you're used to the MSSQL or MySQL behavior, you migrate your
> application to PostgreSQL or whatever - how do you find out about this
> behavior? Users are likely to visit
>
>    https://www.postgresql.org/docs/12/functions-json.html
>
> but that says nothing about how jsonb_set works with NULL values :-(



We should certainly fix that. I accept some responsibility for the omission.



>
> You're right raising an exception may not be the "right behavior" for
> whatever definition of "right". But I kinda agree with David that it's
> somewhat reasonable when we don't know what the "universally correct"
> thing is (or when there's no such thing). IMHO that's better than
> silently discarding some of the data.


I'm not arguing against the idea of improving the situation. But I am
arguing against a minimal fix that will not provide much of value to a
careful app developer. i.e. I want to do more to support app devs.
Ideally they would not need to use wrapper functions. There will be
plenty of situations where it is mighty inconvenient to catch an
exception thrown by jsonb_set(). And catching exceptions can be
expensive. You want to avoid that if possible in your
performance-critical plpgsql code.
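
For illustration, a minimal sketch of the kind of wrapper an app dev can
write today (jsonb_set_safe is a hypothetical name, not an existing
function):

```
-- Hypothetical wrapper: a NULL new value becomes a JSON null instead of
-- nulling out the whole target.
CREATE FUNCTION jsonb_set_safe(target jsonb, path text[], new_value jsonb)
RETURNS jsonb
LANGUAGE sql IMMUTABLE AS $$
  SELECT jsonb_set(target, path, coalesce(new_value, 'null'::jsonb))
$$;
```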



>
> FWIW I think the JSON/JSONB part of our code base is amazing, and the
> fact that various other databases adopted something very similar over
> the last couple of years just confirms that. And if this is the only
> speck of dust in the API, I think that's pretty amazing.


TY. When I first saw the SQL/JSON spec I thought I should send a request
to the SQL standards committee for a royalty payment, since it looked so
familiar ;-)


>
> I'm not sure how significant this issue actually is - it's true we got a
> couple of complaints over the years (judging by a quick search for
> jsonb_set and NULL in the archives), but I'm not sure that's enough to
> justify any changes in backbranches. I'd say no, but I have no idea how
> many people are affected by this but don't know about it ...
>
>

No, no backpatching. As I said upthread, this isn't a bug, but it is
arguably a POLA violation, which is why we should do something for
release 13.


cheers


andrew


-- 
Andrew Dunstan                https://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services





Re: jsonb_set() strictness considered harmful to data

2019-10-20 Thread Paul A Jungwirth
> That said, I think it is reasonable that a PostgreSQL JSON function
> behaves in the way that JSON users would expect, so here is my +1 for
> interpreting an SQL NULL as a JSON null in the above case

Just to chime in as another application developer: the current
functionality does seem pretty surprising and dangerous to me. Raising
an exception seems pretty annoying. Setting the key's value to a JSON
null would be fine, but I also like the idea of removing the key
entirely, since that gives you strictly more functionality: you can
always set the key to a JSON null by passing one in, if that's what
you want. But there are lots of other functions that convert SQL NULL
to JSON null:

postgres=# select row_to_json(row(null)), json_build_object('foo', null),
  json_object(array['foo', null]), json_object(array['foo'], array[null]);
 row_to_json | json_build_object |  json_object   |  json_object
-------------+-------------------+----------------+----------------
 {"f1":null} | {"foo" : null}    | {"foo" : null} | {"foo" : null}
(1 row)

(The jsonb variants give the same results.)

I think those functions are very similar to jsonb_set here, and I'd
expect jsonb_set to do what they do (i.e. convert SQL NULL to JSON
null).

Paul




Re: jsonb_set() strictness considered harmful to data

2019-10-20 Thread Laurenz Albe
On Fri, 2019-10-18 at 21:18 -0500, Ariadne Conill wrote:
> postgres=# \pset null '(null)'
> Null display is "(null)".
> postgres=# select jsonb_set('{"a":1,"b":2,"c":3}'::jsonb, '{a}', NULL);
>  jsonb_set
> -----------
>  (null)
> (1 row)
> 
> This behaviour is basically giving an application developer a loaded
> shotgun and pointing it at their feet.  It is not a good design.  It
> is a design which has likely led to many users experiencing
> unintentional data loss.

I understand your sentiments, even if you voiced them too drastically for
my taste.

The basic problem is that SQL NULL and JSON null have different semantics,
and while it is surprising for you that a database function returns NULL
if an argument is NULL, many database people would be surprised by the
opposite.  Please have some understanding.

That said, I think it is reasonable that a PostgreSQL JSON function
behaves in the way that JSON users would expect, so here is my +1 for
interpreting an SQL NULL as a JSON null in the above case, so that the
result of the above becomes
{"a": null, "b": 2, "c": 3}

-1 for backpatching such a change.

Yours,
Laurenz Albe
-- 
Cybertec | https://www.cybertec-postgresql.com





Re: jsonb_set() strictness considered harmful to data

2019-10-20 Thread Steven Pousty
I would think though that raising an exception is better than a default
behavior which deletes data.
As an app dev I am quite used to all sorts of "APIs" throwing exceptions
and have learned to deal with them.

This is my way of saying that raising an exception is an improvement over
the current situation. May not be the "best" solution but definitely an
improvement.
Thanks
Steve

On Sun, Oct 20, 2019 at 12:48 PM Andrew Dunstan <
andrew.duns...@2ndquadrant.com> wrote:

>
> On 10/20/19 1:14 PM, David G. Johnston wrote:
> > On Sun, Oct 20, 2019 at 5:31 AM Andrew Dunstan wrote:
> >
> > And yet another is to
> > raise an exception, which is easy to write but really punts the issue
> > back to the application programmer who will have to decide how to
> > ensure
> > they never pass in a NULL parameter.
> >
> >
> > That's kinda the point - if they never pass NULL they won't encounter
> > any problems but as soon as the data and their application don't see
> > eye-to-eye the application developer has to decide what they want to
> > do about it.  We are in no position to decide for them and making it
> > obvious they have a decision to make and implement here doesn't seem
> > like an improper position to take.
>
>
> The app dev can avoid this problem today by making sure they don't pass
> a NULL as the value. Or they can use a wrapper function which does that
> for them. So frankly this doesn't seem like much of an advance. And, as
> has been noted, it's not consistent with what either MySQL or MSSQL do.
> In general I'm not that keen on raising an exception for cases like this.
>
>
> cheers
>
>
> andrew
>
>
> --
> Andrew Dunstan                https://www.2ndQuadrant.com
> PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
>
>
>
>


Re: jsonb_set() strictness considered harmful to data

2019-10-20 Thread Tomas Vondra

On Sun, Oct 20, 2019 at 03:48:05PM -0400, Andrew Dunstan wrote:


On 10/20/19 1:14 PM, David G. Johnston wrote:

On Sun, Oct 20, 2019 at 5:31 AM Andrew Dunstan wrote:

And yet another is to
raise an exception, which is easy to write but really punts the issue
back to the application programmer who will have to decide how to
ensure
they never pass in a NULL parameter.


That's kinda the point - if they never pass NULL they won't encounter
any problems but as soon as the data and their application don't see
eye-to-eye the application developer has to decide what they want to
do about it.  We are in no position to decide for them and making it
obvious they have a decision to make and implement here doesn't seem
like an improper position to take.



The app dev can avoid this problem today by making sure they don't pass
a NULL as the value. Or they can use a wrapper function which does that
for them. So frankly this doesn't seem like much of an advance. And, as
has been noted, it's not consistent with what either MySQL or MSSQL do.
In general I'm not that keen on raising an exception for cases like this.



I think the general premise of this thread is that the application
developer does not realize that this may be necessary, because it's a bit
surprising behavior, particularly when they have more experience with other
databases that behave differently. It's also pretty easy to not notice
this issue for a long time, resulting in significant data loss.

Let's say you're used to the MSSQL or MySQL behavior, you migrate your
application to PostgreSQL or whatever - how do you find out about this
behavior? Users are likely to visit

   https://www.postgresql.org/docs/12/functions-json.html

but that says nothing about how jsonb_set works with NULL values :-(

You're right raising an exception may not be the "right behavior" for
whatever definition of "right". But I kinda agree with David that it's
somewhat reasonable when we don't know what the "universally correct"
thing is (or when there's no such thing). IMHO that's better than
silently discarding some of the data.

FWIW I think the JSON/JSONB part of our code base is amazing, and the
fact that various other databases adopted something very similar over
the last couple of years just confirms that. And if this is the only
speck of dust in the API, I think that's pretty amazing.

I'm not sure how significant this issue actually is - it's true we got a
couple of complaints over the years (judging by a quick search for
jsonb_set and NULL in the archives), but I'm not sure that's enough to
justify any changes in backbranches. I'd say no, but I have no idea how
many people are affected by this but don't know about it ...

regards

--
Tomas Vondra  http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services 





Re: jsonb_set() strictness considered harmful to data

2019-10-20 Thread Andrew Dunstan


On 10/20/19 1:14 PM, David G. Johnston wrote:
> On Sun, Oct 20, 2019 at 5:31 AM Andrew Dunstan wrote:
>
> And yet another is to
> raise an exception, which is easy to write but really punts the issue
> back to the application programmer who will have to decide how to
> ensure
> they never pass in a NULL parameter.
>
>
> That's kinda the point - if they never pass NULL they won't encounter
> any problems but as soon as the data and their application don't see
> eye-to-eye the application developer has to decide what they want to
> do about it.  We are in no position to decide for them and making it
> obvious they have a decision to make and implement here doesn't seem
> like an improper position to take.


The app dev can avoid this problem today by making sure they don't pass
a NULL as the value. Or they can use a wrapper function which does that
for them. So frankly this doesn't seem like much of an advance. And, as
has been noted, it's not consistent with what either MySQL or MSSQL do.
In general I'm not that keen on raising an exception for cases like this.


cheers


andrew


-- 
Andrew Dunstan                https://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services





Re: Too many SET TimeZone and Application_name queries

2019-10-20 Thread Jeff Janes
On Fri, Oct 11, 2019 at 7:49 AM Amarendra Konda 
wrote:

> Hi,
>
> In our test environment, it was observed that there are too many queries
> were getting fired to the database server,
>

What does "too many" mean here?  Is it just more than you like to see in
your log file, or is there some objective problem?

These look like housekeeping queries which are executed by a connection
pooler each time a connection is checked out of the pool (or perhaps
checked back in).  However, they don't seem to be the housekeeping queries
which pgbouncer itself uses.  I don't think that JDBC automatically issues
them either, although that might depend on your configuration.  So I think
that leaves Tomcat as the most likely culprit.  Tomcat does offer a
connection pool.  Are you using it?

Cheers,

Jeff


Re: jsonb_set() strictness considered harmful to data

2019-10-20 Thread David G. Johnston
On Sun, Oct 20, 2019 at 5:31 AM Andrew Dunstan <
andrew.duns...@2ndquadrant.com> wrote:

> And yet another is to
> raise an exception, which is easy to write but really punts the issue
> back to the application programmer who will have to decide how to ensure
> they never pass in a NULL parameter.


That's kinda the point - if they never pass NULL they won't encounter any
problems but as soon as the data and their application don't see eye-to-eye
the application developer has to decide what they want to do about it.  We
are in no position to decide for them and making it obvious they have a
decision to make and implement here doesn't seem like an improper position
to take.


> Possibly we could even add an extra
> parameter to specify what should be done.
>

Has appeal.



> Should we return NULL in those cases as we do now?
>

Probably the same thing - though I'd accept having the input json being
null result in the output json being null as well.

David J.


Re: UTC-6 or UTC+6?

2019-10-20 Thread Tom Lane
Luca Ferrari  writes:
> template1=# select '2019-10-22 16:00:00' at time zone 'Europe/Rome' as my_time
> , '2019-10-22 16:00:00' at time zone 'America/Denver' as what_should_be,
> '2019-10-22 16:00:00' at time zone 'UTC-6' as utc_minus_6,
> '2019-10-22 16:00:00' at time zone 'UTC+6' as utc_plus_6;
> -[ RECORD 1 ]--+--------------------
> my_time        | 2019-10-22 16:00:00
> what_should_be | 2019-10-22 08:00:00
> utc_minus_6    | 2019-10-22 20:00:00
> utc_plus_6     | 2019-10-22 08:00:00

> Now, the Denver time should be 8:00, which is what is reported as
> UTC+6, but as far as I know Denver is UTC-6 (during daylight saving time). What
> am I missing here?

Timezone names of that form are interpreted per the POSIX standard,
which uses positive for west-of-Greenwich, unlike the ISO standard
which uses the opposite sign.  See

https://www.postgresql.org/docs/current/datatype-datetime.html#DATATYPE-TIMEZONES
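
If an ISO-style sign is wanted, the offset can also be given as an interval
instead of a POSIX zone name -- a sketch, assuming the session time zone is
Europe/Rome as in the original example:

```
-- With an interval, positive means east of Greenwich (ISO convention),
-- so -06:00 here corresponds to Denver's offset during DST.
SELECT '2019-10-22 16:00:00'::timestamptz AT TIME ZONE INTERVAL '-06:00';
-- => 2019-10-22 08:00:00
```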

regards, tom lane




Re: jsonb_set() strictness considered harmful to data

2019-10-20 Thread Isaac Morland
On Sun, 20 Oct 2019 at 08:32, Andrew Dunstan 
wrote:

>
> Understood. I think the real question here is what it should do instead
> when the value is NULL. Your behaviour above is one suggestion, which I
> personally find intuitive. Another has been to remove the associated
> key. Another is to return the original target. And yet another is to
> raise an exception, which is easy to write but really punts the issue
> back to the application programmer who will have to decide how to ensure
> they never pass in a NULL parameter. Possibly we could even add an extra
> parameter to specify what should be done.
>

I vote for removing the key. If we make NULL and 'null'::jsonb the same,
we're missing an opportunity to provide more functionality. Sometimes it's
convenient to be able to handle both the "update" and "remove" cases with
one function, just depending on the parameter value supplied.
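
(For reference, removal of a key is already expressible with the
subtraction operator -- a sketch:)

```
-- jsonb - text deletes the matching key/value pair
SELECT '{"a": 1, "b": 2, "c": 3}'::jsonb - 'a';
-- => {"b": 2, "c": 3}
```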

Also, the question will arise what to do when any of the other
> parameters are NULL. Should we return NULL in those cases as we do now?
>

I would argue that only if the target parameter (the actual json value) is
NULL should the result be NULL. The function is documented as returning the
target, with modifications to a small part of its structure as specified by
the other parameters. It is strange for the result to suddenly collapse
down to NULL just because another parameter is NULL. Perhaps if the path is
NULL, that can mean "don't update". And if create_missing is NULL, that
should mean the same as not specifying it. I think. At a minimum, if we
don't change it, the documentation needs to get one of those warning boxes
alerting people that the functions will destroy their input entirely rather
than slightly modifying it if any of the other parameters are NULL.

My only doubt about any of this is that by the same argument, functions
like replace() should not return NULL if the 2nd or 3rd parameter is NULL.
I'm guessing replace() is specified by SQL and also unchanged in many
versions so therefore not eligible for re-thinking but it still gives me
just a bit of pause.


Re: jsonb_set() strictness considered harmful to data

2019-10-20 Thread Andrew Dunstan


On 10/20/19 4:39 AM, Floris Van Nee wrote:
>
> FWIW I've been bitten by this 'feature' more than once as well,
> accidentally erasing a column. Now I usually write js = jsonb_set(js,
> coalesce(new_column, 'null'::jsonb)) to prevent erasing the whole
> column, and instead setting the value to a jsonb null value, but I
> also found the STRICT behavior very surprising at first..
>
>
>



Understood. I think the real question here is what it should do instead
when the value is NULL. Your behaviour above is one suggestion, which I
personally find intuitive. Another has been to remove the associated
key. Another is to return the original target. And yet another is to
raise an exception, which is easy to write but really punts the issue
back to the application programmer who will have to decide how to ensure
they never pass in a NULL parameter. Possibly we could even add an extra
parameter to specify what should be done.


Also, the question will arise what to do when any of the other
parameters are NULL. Should we return NULL in those cases as we do now?


cheers


andrew


-- 
Andrew Dunstan                https://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services





UTC-6 or UTC+6?

2019-10-20 Thread Luca Ferrari
I'm a little confused by trying to get the time in a different
time zone, namely UTC-6:

template1=# select '2019-10-22 16:00:00' at time zone 'Europe/Rome' as my_time
, '2019-10-22 16:00:00' at time zone 'America/Denver' as what_should_be,
'2019-10-22 16:00:00' at time zone 'UTC-6' as utc_minus_6,
'2019-10-22 16:00:00' at time zone 'UTC+6' as utc_plus_6;
-[ RECORD 1 ]--+--------------------
my_time        | 2019-10-22 16:00:00
what_should_be | 2019-10-22 08:00:00
utc_minus_6    | 2019-10-22 20:00:00
utc_plus_6     | 2019-10-22 08:00:00


Now, the Denver time should be 8:00, which is what is reported as
UTC+6, but as far as I know Denver is UTC-6 (during daylight saving time). What
am I missing here?

Thanks,
Luca




Re: jsonb_set() strictness considered harmful to data

2019-10-20 Thread Steve Atkins



On 19/10/2019 07:52, Ariadne Conill wrote:


I would say that any thing like

update whatever set column=jsonb_set(column, '{foo}', NULL)

should throw an exception.  It should do, literally, *anything* else
but blank that column.


steve=# create table foo (bar jsonb not null);
CREATE TABLE
steve=# insert into foo (bar) values ('{"a":"b"}');
INSERT 0 1
steve=# update foo set bar = jsonb_set(bar, '{foo}', NULL);
ERROR:  null value in column "bar" violates not-null constraint
DETAIL:  Failing row contains (null).
steve=# update foo set bar = jsonb_set(bar, '{foo}', 'null'::jsonb);
UPDATE 1

I don't see any behaviour that's particularly surprising there? Though I 
understand how an app developer who's light on SQL might get it wrong - 
and I've made similar mistakes in schema upgrade scripts without 
involving jsonb.


Cheers,
  Steve





jsonb_set() strictness considered harmful to data

2019-10-20 Thread Floris Van Nee
FWIW I've been bitten by this 'feature' more than once as well, accidentally
erasing a column. Now I usually write something like
js = jsonb_set(js, '{new_column}', coalesce(new_column, 'null'::jsonb))
to avoid erasing the whole column and instead set the value to a JSON null,
but I also found the STRICT behavior very surprising at first.


-Floris