Index

2019-10-24 Thread Sonam Sharma
I have created indexes with a certain set of columns. Now I want to remove
one of the columns and add a new column. Can this be done without dropping
the index?
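
A sketch of the usual workaround (hypothetical names; an index's column list
cannot be altered in place, so a replacement is built and swapped in):

CREATE INDEX CONCURRENTLY my_idx_new ON my_table (col_a, col_new);
DROP INDEX CONCURRENTLY my_idx;
ALTER INDEX my_idx_new RENAME TO my_idx;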


Re: Is this a bug ?

2019-10-24 Thread George Neuner
On Wed, 23 Oct 2019 11:27:12 -0500, Ron 
wrote:

>On 10/23/19 11:20 AM, Geoff Winkless wrote:
>> On Wed, 23 Oct 2019 at 17:09, Ron  wrote:
>>
>>> As much as I hate to say it, MSFT was right to ignore this bug in the 
>>> standard.
>> Standards are standards for a reason. It is almost never correct to
>> deliberately ignore them. If you don't like them, then revise the
>> standard.
>>
>> Historically Microsoft ignored standards either because they
>> misunderstood them or because they wanted to lock in their customers,
>> not for any reasons of altruism.
>>
>> For what it's worth, I can see a value to having
>>
>> SELECT 'this is quite a long string'
>> 'which I've joined together '
>> 'across multiple lines';
>>
>> although the advantage of it vs using a concat operator is slim.
>
>There is no advantage to using it vs using a concat operator, and every
>disadvantage.

It WAS an advantage querying interactively on 80 character text mode
screens, and when SQL was embedded (directly) into programs written in
other languages.

Regardless of how recent the latest standard is - SQL still has many ...
warts really, but I will be charitable and call them "vestiges" ... of
its roots as a 1970s language.

YMMV,
George





Re: jsonb_set() strictness considered harmful to data

2019-10-24 Thread Stuart McGraw

On 10/24/19 2:17 PM, Tom Lane wrote:

Laurenz Albe  writes:

On Wed, 2019-10-23 at 13:00 -0600, Stuart McGraw wrote:

It is less sensible with compound values where the rule can apply to
individual scalar components.


I agree that JSON can sensibly be viewed as a composite value, but ...


  And indeed that is what Postgresql does
for another compound type:

# select array_replace(array[1,2,3],2,NULL);
 array_replace
---------------
 {1,NULL,3}

The returned value is not NULL.  Why the inconsistency between the array
type and json type?


... the flaw in this argument is that the array element is actually
a SQL NULL when we're done.  To do something similar in the JSON case,
we have to translate SQL NULL to JSON null, and that's cheating to
some extent.  They're not the same thing (and I'll generally resist
proposals to, say, make SELECT 'null'::json IS NULL return true).

Maybe it's okay to make this case work like that, but don't be too
high and mighty about it being logically clean; it isn't.

regards, tom lane


Sure, but my point was not that this was a perfect "logically clean"
answer, just that the argument, which was made multiple times, that
the entire result should be NULL because "that's the way SQL NULLs
work" is not really right.

It does seem to me that mapping NULL to "null" is likely a workable
approach but that's just my uninformed opinion.
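
A minimal demonstration of the difference (behavior as of PostgreSQL 12,
where jsonb_set() is declared STRICT):

# select jsonb_set('{"a": 1, "b": 2}'::jsonb, '{a}', NULL);
 jsonb_set
-----------

(1 row)

# select array_replace(array[1,2,3], 2, NULL);
 array_replace
---------------
 {1,NULL,3}
(1 row)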




Re: EXPLAIN BUFFERS and I/O timing accounting questions

2019-10-24 Thread Andres Freund
Hi,

On 2019-10-24 16:31:39 -0700, Maciek Sakrejda wrote:
> On Thu, Oct 24, 2019 at 2:25 PM Andres Freund  wrote:
> > Note that the buffer access stats do *not* count the number of distinct
> > buffers accessed, but purely the number of buffer accesses.
> 
> You mean, even within a single node? That is, if a node hits a block ten
> times, that counts as ten blocks hit? And if it reads a block and then
> needs it three more times, that's one read plus three hit?

Correct. It's basically the number of lookups in the buffer
pool. There are some nodes that will kind of repeatedly use the same buffer
without increasing the count. E.g. a seqscan will keep the current page
pinned until all the tuples on the page have been returned.

Consider e.g. a nested loop indexscan - how would we determine whether
we've previously looked at a buffer within the indexscan, without
drastically increasing the resources required?
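
As an illustration (hypothetical numbers; the counts depend entirely on the
data and the cache state):

EXPLAIN (ANALYZE, BUFFERS) SELECT count(*) FROM some_table;
--   ...
--   Buffers: shared hit=40 read=10
-- i.e. 50 buffer-pool lookups in total; a page looked up ten times
-- contributes ten to "hit", not one.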


> > Do you have an example?
> 
> Sure, here's the "abridged" plan:
> 
> [{ "Plan": {
>   "Node Type": "Aggregate",
>   "Plan Rows": 1,
>   "Plan Width": 8,
>   "Total Cost": 26761745.14,
>   "Actual Rows": 1,
>   "I/O Read Time": 234129.299,
>   "I/O Write Time": 0,
>   "Local Hit Blocks": 0,
>   "Local Read Blocks": 0,
>   "Local Dirtied Blocks": 0,
>   "Local Written Blocks": 0,
>   "Shared Hit Blocks": 4847762,
>   "Shared Read Blocks": 1626312,
>   "Shared Dirtied Blocks": 541014,
>   "Shared Written Blocks": 0,
>   "Temp Read Blocks": 0,
>   "Temp Written Blocks": 4786,
>   "Plans": [
> {
>   "Node Type": "ModifyTable",
>   "Operation": "Delete",
>   "Parent Relationship": "InitPlan",
>   "Plan Rows": 13943446,
>   "Plan Width": 6,
>   "Total Cost": 25774594.63,
>   "Actual Rows": 2178416,
>   "I/O Read Time": 234129.299,
>   "I/O Write Time": 0,
>   "Local Hit Blocks": 0,
>   "Local Read Blocks": 0,
>   "Local Dirtied Blocks": 0,
>   "Local Written Blocks": 0,
>   "Shared Hit Blocks": 4847762,
>   "Shared Read Blocks": 1626312,
>   "Shared Dirtied Blocks": 541014,
>   "Shared Written Blocks": 0,
>   "Temp Read Blocks": 0,
>   "Temp Written Blocks": 0,
>   "Plans": ""
> },
...

I think this may be partially confusing due to the way the json output
looks. Which is so bad that it's imo fair to call it a bug.  Here's text
output for a similar-ish query:


Aggregate  (cost=112.50..112.51 rows=1 width=8) (actual time=35.893..35.894 rows=1 loops=1)
  Output: count(*)
  Buffers: shared hit=6015 dirtied=15
  CTE foo
    ->  Delete on public.p  (cost=0.00..45.00 rows=3000 width=6) (actual time=0.235..28.239 rows=3000 loops=1)
          Output: p.data
          Delete on public.p
          Delete on public.c1
          Delete on public.c2
          Buffers: shared hit=6015 dirtied=15
          ->  Seq Scan on public.p  (cost=0.00..15.00 rows=1000 width=6) (actual time=0.161..1.375 rows=1000 loops=1)
                Output: p.ctid
                Buffers: shared hit=5 dirtied=5
          ->  Seq Scan on public.c1  (cost=0.00..15.00 rows=1000 width=6) (actual time=0.147..1.314 rows=1000 loops=1)
                Output: c1.ctid
                Buffers: shared hit=5 dirtied=5
          ->  Seq Scan on public.c2  (cost=0.00..15.00 rows=1000 width=6) (actual time=0.145..1.170 rows=1000 loops=1)
                Output: c2.ctid
                Buffers: shared hit=5 dirtied=5
  ->  CTE Scan on foo  (cost=0.00..60.00 rows=3000 width=0) (actual time=0.243..34.083 rows=3000 loops=1)
        Output: foo.data
        Buffers: shared hit=6015 dirtied=15
Planning Time: 0.508 ms
Execution Time: 36.512 ms

Note that the node below the Aggregate is actually the CTE, and that
the DELETEs are below that. But the json, slightly abbreviated, looks
like:

[
  {
"Plan": {
  "Node Type": "Aggregate",
  "Strategy": "Plain",
  "Shared Hit Blocks": 6015,
  "Shared Read Blocks": 0,
  "Shared Dirtied Blocks": 15,
  "Shared Written Blocks": 0,
  "Plans": [
{
  "Node Type": "ModifyTable",
  "Operation": "Delete",
  "Parent Relationship": "InitPlan",
  "Subplan Name": "CTE foo",
  "Output": ["p.data"],
  "Target Tables": [
{
  "Relation Name": "p",
  "Schema": "public",
  "Alias": "p"
},
{
  "Relation Name": "c1",
  "Schema": "public",
  "Alias": "c1"
},
{
  "Relation Name": "c2",
  "Schema": "public",
  "Alias": "c2"
}
  ],
  "Shared Hit Blocks": 6015,
  "Shared Read Blocks": 0,
  "Shared Dirtied Blocks": 15,
  "Shared Written Blocks": 0,
  "Plans": [
{
  "Node Type": "Seq Scan",
  "Parent Relationship": "Member",
  "Output": ["p.ctid"],
  "Shared Hit Blocks": 5,
 

Re: EXPLAIN BUFFERS and I/O timing accounting questions

2019-10-24 Thread Maciek Sakrejda
On Thu, Oct 24, 2019 at 2:25 PM Andres Freund  wrote:
> Note that the buffer access stats do *not* count the number of distinct
> buffers accessed, but purely the number of buffer accesses.

You mean, even within a single node? That is, if a node hits a block ten
times, that counts as ten blocks hit? And if it reads a block and then
needs it three more times, that's one read plus three hit?

> Do you have an example?

Sure, here's the "abridged" plan:

[{ "Plan": {
  "Node Type": "Aggregate",
  "Plan Rows": 1,
  "Plan Width": 8,
  "Total Cost": 26761745.14,
  "Actual Rows": 1,
  "I/O Read Time": 234129.299,
  "I/O Write Time": 0,
  "Local Hit Blocks": 0,
  "Local Read Blocks": 0,
  "Local Dirtied Blocks": 0,
  "Local Written Blocks": 0,
  "Shared Hit Blocks": 4847762,
  "Shared Read Blocks": 1626312,
  "Shared Dirtied Blocks": 541014,
  "Shared Written Blocks": 0,
  "Temp Read Blocks": 0,
  "Temp Written Blocks": 4786,
  "Plans": [
{
  "Node Type": "ModifyTable",
  "Operation": "Delete",
  "Parent Relationship": "InitPlan",
  "Plan Rows": 13943446,
  "Plan Width": 6,
  "Total Cost": 25774594.63,
  "Actual Rows": 2178416,
  "I/O Read Time": 234129.299,
  "I/O Write Time": 0,
  "Local Hit Blocks": 0,
  "Local Read Blocks": 0,
  "Local Dirtied Blocks": 0,
  "Local Written Blocks": 0,
  "Shared Hit Blocks": 4847762,
  "Shared Read Blocks": 1626312,
  "Shared Dirtied Blocks": 541014,
  "Shared Written Blocks": 0,
  "Temp Read Blocks": 0,
  "Temp Written Blocks": 0,
  "Plans": ""
},
{
  "Node Type": "ModifyTable",
  "Operation": "Delete",
  "Parent Relationship": "InitPlan",
  "Plan Rows": 63897788,
  "Plan Width": 38,
  "Total Cost": 315448.53,
  "Actual Rows": 0,
  "I/O Read Time": 30529.231,
  "I/O Write Time": 0,
  "Local Hit Blocks": 0,
  "Local Read Blocks": 0,
  "Local Dirtied Blocks": 0,
  "Local Written Blocks": 0,
  "Shared Hit Blocks": 12964205,
  "Shared Read Blocks": 83260,
  "Shared Dirtied Blocks": 48256,
  "Shared Written Blocks": 0,
  "Temp Read Blocks": 4788,
  "Temp Written Blocks": 0,
  "Plans": ""
},
{
  "Node Type": "ModifyTable",
  "Operation": "Delete",
  "Parent Relationship": "InitPlan",
  "Plan Rows": 45657680,
  "Plan Width": 38,
  "Total Cost": 357974.43,
  "Actual Rows": 0,
  "I/O Read Time": 24260.512,
  "I/O Write Time": 0,
  "Local Hit Blocks": 0,
  "Local Read Blocks": 0,
  "Local Dirtied Blocks": 0,
  "Local Written Blocks": 0,
  "Shared Hit Blocks": 10521264,
  "Shared Read Blocks": 64450,
  "Shared Dirtied Blocks": 36822,
  "Shared Written Blocks": 0,
  "Temp Read Blocks": 4788,
  "Temp Written Blocks": 1,
  "Plans": ""
},
{
  "Node Type": "CTE Scan",
  "Parent Relationship": "Outer",
  "Plan Rows": 13943446,
  "Plan Width": 8,
  "Total Cost": 278868.92,
  "Actual Rows": 2178416,
  "I/O Read Time": 234129.299,
  "I/O Write Time": 0,
  "Local Hit Blocks": 0,
  "Local Read Blocks": 0,
  "Local Dirtied Blocks": 0,
  "Local Written Blocks": 0,
  "Shared Hit Blocks": 4847762,
  "Shared Read Blocks": 1626312,
  "Shared Dirtied Blocks": 541014,
  "Shared Written Blocks": 0,
  "Temp Read Blocks": 0,
  "Temp Written Blocks": 4786
}
  ]
}}]

Let me know if I removed anything I shouldn't have and I can follow up with
extra info.

>  I assume what's going on is that the cost of
> the CTE is actually attributed (in equal parts or something like that)
> to all places using the CTE. Do the numbers add up if you just exclude
> the CTE?

Not really--it looks like the full Shared Blocks Hit cost in the root is
the same as the CTE by itself. This is playing around with the plan in a
node console:

> p[0].Plan['Shared Hit Blocks']
4847762
> p[0].Plan.Plans.map(p => p['Node Type'])
[ 'ModifyTable', 'ModifyTable', 'ModifyTable', 'CTE Scan' ]
> p[0].Plan.Plans.map(p => p['Shared Hit Blocks'])
[ 4847762, 12964205, 10521264, 4847762 ]

> IIRC one can get multiple plans when there's a DO ALSO rule. There might
> be other ways to get there too.

Thanks, good to know.


Re: A very puzzling backup/restore problem

2019-10-24 Thread Tom Lane
Adrian Klaver  writes:
> On 10/24/19 2:58 PM, stan wrote:
>> So, it appears that this means that the function cannot be found, even if
>> it is in the new (default) schema.

> The original error was not about finding the function; it was about not
> finding the table in the function:

> psql:task_instance.dump:55: ERROR:  relation "permitted_work" does not exist
> LINE 3: permitted_work
>  ^

> You need to schema-qualify the table name inside the function.

Or attach a SET clause to the function, so that it executes with a
known search_path regardless of the session's prevailing path.
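
For example, a sketch (the function name is taken from the error context
elsewhere in this thread; adjust the argument list to match the actual
function):

ALTER FUNCTION public.check_permission() SET search_path = public;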

regards, tom lane




Re: A very puzzling backup/restore problem

2019-10-24 Thread Adrian Klaver

On 10/24/19 2:58 PM, stan wrote:


On Thu, Oct 24, 2019 at 07:40:29AM -0700, Adrian Klaver wrote:

On 10/24/19 7:32 AM, stan wrote:

On Thu, Oct 24, 2019 at 07:04:11AM -0700, Adrian Klaver wrote:

On 10/24/19 3:52 AM, stan wrote:



I have a very confusing issue. I am trying to back up and restore a single
table.

first I dump the table.


Actually you are just dumping the table data.

More below.



Script started on 2019-10-24 06:29:12-0400
stan@smokey:~$ pg_dump -a --table task_instance > task_instance.dump

Then I connect to the db, and verify that things are as expected.

stan@smokey:~$ psql
psql (11.5 (Ubuntu 11.5-3.pgdg18.04+1))
Type "help" for help.

[local] stan@stan=> \dt
List of relations
Schema |   Name   | Type  | Owner
+--+---+---
public | biz_constants| table | stan
public | bom_item | table | stan
public | costing_unit | table | stan
public | customer | table | stan
public | earthquake   | table | stan
public | employee | table | stan
public | expense_report_instance  | table | stan
public | gl_code  | table | stan
public | mfg  | table | stan
public | mfg_part | table | stan
public | mfg_vendor_relationship  | table | stan
public | permitted_work   | table | stan
public | phone_number_test| table | stan
public | project  | table | stan
public | project_budget_component | table | stan
public | project_cost_category| table | stan
public | rate | table | stan
public | salary   | table | stan
public | task_instance| table | stan
public | vendor   | table | stan
public | work_type| table | stan
(21 rows)

[local] stan@stan=> \d task_instance
                                     Table "public.task_instance"
    Column     |           Type           | Collation | Nullable |                    Default
---------------+--------------------------+-----------+----------+-----------------------------------------------
 task_instance | integer                  |           | not null | nextval('task_instance_key_serial'::regclass)
 project_key   | integer                  |           | not null |
 employee_key  | integer                  |           | not null |
 work_type_key | integer                  |           | not null |
 hours         | numeric(5,2)             |           | not null |
 work_start    | timestamp with time zone |           | not null |
 work_end      | timestamp with time zone |           | not null |
 modtime       | timestamp with time zone |           | not null | CURRENT_TIMESTAMP
 lock          | boolean                  |           |          | true
 descrip       | character varying        |           |          |
Indexes:
    "task_instance_pkey" PRIMARY KEY, btree (task_instance)
    "task_constraint" UNIQUE CONSTRAINT, btree (employee_key, work_type_key, project_key, work_start, work_end)
Foreign-key constraints:
    "task_instance_employee_key_fkey" FOREIGN KEY (employee_key) REFERENCES empl

[local] stan@stan=> \d permitted_work
                      Table "public.permitted_work"
    Column     |           Type           | Collation | Nullable |      Default
---------------+--------------------------+-----------+----------+-------------------
 employee_key  | integer                  |           | not null |
 work_type_key | integer                  |           | not null |
 permit        | boolean                  |           | not null | false
 modtime       | timestamp with time zone |           | not null | CURRENT_TIMESTAMP
Indexes:
    "permit_constraint" UNIQUE CONSTRAINT, btree (employee_key, work_type_key)
Foreign-key constraints:
    "permitted_work_employee_key_fkey" FOREIGN KEY (employee_key) REFERENCES employee(employee_key) ON DELETE RESTRICT
    "permitted_work_work_type_key_fkey" FOREIGN KEY (work_type_key) REFERENCES work_type(work_type_key) ON DELETE RESTRICT

Then I delete the rows from the table.

[local] stan@stan=> delete from task_instance ;
DELETE 31

Then I try to restore from the dump file.

[local] stan@stan=> \i task_instance.dump
SET
SET
SET
SET
SET
set_config

(1 row)

SET
SET
SET
SET
psql:task_instance.dump:55: ERROR:  relation "permitted_work" does not exist
LINE 3: permitted_work
   ^


In your \d task_instance above I do not see a trigger that calls

Re: A very puzzling backup/restore problem

2019-10-24 Thread stan


On Thu, Oct 24, 2019 at 07:40:29AM -0700, Adrian Klaver wrote:
> On 10/24/19 7:32 AM, stan wrote:
> > On Thu, Oct 24, 2019 at 07:04:11AM -0700, Adrian Klaver wrote:
> > > On 10/24/19 3:52 AM, stan wrote:
> > > > 
> > > > 
> > > > I have a very confusing issue. I am trying to back up and restore a
> > > > single table.
> > > > 
> > > > first I dump the table.
> > > 
> > > Actually you are just dumping the table data.
> > > 
> > > More below.
> > > > 
> > > > 
> > > > Script started on 2019-10-24 06:29:12-0400
> > > > stan@smokey:~$ pg_dump -a --table task_instance > task_instance.dump
> > > > 
> > > > Then I connect to the db, and verify that things are as expected.
> > > > 
> > > > stan@smokey:~$ psql
> > > > psql (11.5 (Ubuntu 11.5-3.pgdg18.04+1))
> > > > Type "help" for help.
> > > > 
> > > > [local] stan@stan=> \dt
> > > >List of relations
> > > >Schema |   Name   | Type  | Owner
> > > > +--+---+---
> > > >public | biz_constants| table | stan
> > > >public | bom_item | table | stan
> > > >public | costing_unit | table | stan
> > > >public | customer | table | stan
> > > >public | earthquake   | table | stan
> > > >public | employee | table | stan
> > > >public | expense_report_instance  | table | stan
> > > >public | gl_code  | table | stan
> > > >public | mfg  | table | stan
> > > >public | mfg_part | table | stan
> > > >public | mfg_vendor_relationship  | table | stan
> > > >public | permitted_work   | table | stan
> > > >public | phone_number_test| table | stan
> > > >public | project  | table | stan
> > > >public | project_budget_component | table | stan
> > > >public | project_cost_category| table | stan
> > > >public | rate | table | stan
> > > >public | salary   | table | stan
> > > >public | task_instance| table | stan
> > > >public | vendor   | table | stan
> > > >public | work_type| table | stan
> > > > (21 rows)
> > > > 
> > > > [local] stan@stan=> \d task_instance
> > > >                                      Table "public.task_instance"
> > > >     Column     |           Type           | Collation | Nullable |                    Default
> > > > ---------------+--------------------------+-----------+----------+-----------------------------------------------
> > > >  task_instance | integer                  |           | not null | nextval('task_instance_key_serial'::regclass)
> > > >  project_key   | integer                  |           | not null |
> > > >  employee_key  | integer                  |           | not null |
> > > >  work_type_key | integer                  |           | not null |
> > > >  hours         | numeric(5,2)             |           | not null |
> > > >  work_start    | timestamp with time zone |           | not null |
> > > >  work_end      | timestamp with time zone |           | not null |
> > > >  modtime       | timestamp with time zone |           | not null | CURRENT_TIMESTAMP
> > > >  lock          | boolean                  |           |          | true
> > > >  descrip       | character varying        |           |          |
> > > > Indexes:
> > > >     "task_instance_pkey" PRIMARY KEY, btree (task_instance)
> > > >     "task_constraint" UNIQUE CONSTRAINT, btree (employee_key, work_type_key, project_key, work_start, work_end)
> > > > Foreign-key constraints:
> > > >     "task_instance_employee_key_fkey" FOREIGN KEY (employee_key) REFERENCES empl
> > > >
> > > > [local] stan@stan=> \d permitted_work
> > > >                       Table "public.permitted_work"
> > > >     Column     |           Type           | Collation | Nullable |      Default
> > > > ---------------+--------------------------+-----------+----------+-------------------
> > > >  employee_key  | integer                  |           | not null |
> > > >  work_type_key | integer                  |           | not null |
> > > >  permit        | boolean                  |           | not null | false
> > > >  modtime       | timestamp with time zone |           | not null | CURRENT_TIMESTAMP
> > > > Indexes:
> > > >     "permit_constraint" UNIQUE CONSTRAINT, btree (employee_key, work_type_key)
> > > > Foreign-key constraints:
> > > >     "permitted_work_employee_key_fkey" FOREIGN KEY (employee_key) REFERENCES employee(employee_key) ON DELETE

Re: EXPLAIN BUFFERS and I/O timing accounting questions

2019-10-24 Thread Andres Freund
Hi,

On 2019-10-21 23:18:32 -0700, Maciek Sakrejda wrote:
> I ran across an EXPLAIN plan and had some questions about some of its
> details. The BUFFERS docs say
> 
> >The number of blocks shown for an upper-level node includes those used by
> all its child nodes.
> 
> I initially assumed this would be cumulative, but I realized it's probably
> not because some of the blocks affected by each child will actually
> overlap.

Note that the buffer access stats do *not* count the number of distinct
buffers accessed, but purely the number of buffer accesses.

It'd be really expensive to count the number of distinct buffers
accessed, although I guess one could make it less expensive by using
something like hyperloglog (although that will still be hard, due to
buffer replacement etc).


> But this particular plan has a Shared Hit Blocks at the root (an
> Aggregate) that is smaller than some of its children (three ModifyTables
> and a CTE Scan).

Do you have an example?  I assume what's going on is that the cost of
the CTE is actually attributed (in equal parts or something like that)
to all places using the CTE. Do the numbers add up if you just exclude
the CTE?


> This seems to contradict the documentation (since if
> children overlap fully in their buffer usage, the parent should still have
> a cost equal to the costliest child)--any idea what's up? I can send the
> whole plan (attached? inline? it's ~15kb) if that helps.

Or just relevant top-level excerpts.


> Also, a tangential question: why is the top-level structure of a JSON plan
> an array? I've only ever seen one root node with a Plan key there.

IIRC one can get multiple plans when there's a DO ALSO rule. There might
be other ways to get there too.

Greetings,

Andres Freund




Re: jsonb_set() strictness considered harmful to data

2019-10-24 Thread Tom Lane
Laurenz Albe  writes:
> On Wed, 2019-10-23 at 13:00 -0600, Stuart McGraw wrote:
>> It is less sensible with compound values where the rule can apply to
>> individual scalar components.

I agree that JSON can sensibly be viewed as a composite value, but ...

>>  And indeed that is what Postgresql does
>> for another compound type:
>> 
>> # select array_replace(array[1,2,3],2,NULL);
>>  array_replace
>> ---------------
>>  {1,NULL,3}
>> 
>> The returned value is not NULL.  Why the inconsistency between the array
>> type and json type?

... the flaw in this argument is that the array element is actually
a SQL NULL when we're done.  To do something similar in the JSON case,
we have to translate SQL NULL to JSON null, and that's cheating to
some extent.  They're not the same thing (and I'll generally resist
proposals to, say, make SELECT 'null'::json IS NULL return true).

Maybe it's okay to make this case work like that, but don't be too
high and mighty about it being logically clean; it isn't.

regards, tom lane




Re: SQL Error [0A000]: ERROR: nondeterministic collations are not supported for LIKE

2019-10-24 Thread Jeff Lanzarotta
Daniel, thanks for the reply.  I believe we will just remove the collation,
allow LIKE to function normally, and wait for a future patch if one is ever
provided.

On Thu, Oct 24, 2019 at 3:39 PM Daniel Verite 
wrote:

> Jeff Lanzarotta wrote:
>
> > I have a question about nondeterministic collations in PostgreSQL 12.  I
> > have created a new collation that is nondeterministic and created several
> > columns which use this collation.  Querying these columns works great
> until
> > I use LIKE.  When I do, I get the following error:
> >
> > SQL Error [0A000]: ERROR: nondeterministic collations are not supported
> for
> > LIKE
> >
> > Is there any plan to allow this functionality?
>
> PostgreSQL development is conducted without a roadmap [1]. Maybe
> someone will submit a patch to enable LIKE with nondeterministic
> collations, but so far that has not happened, judging by the current set
> of patches at https://commitfest.postgresql.org
>
> Such matches can be weirder than you might think (not to
> mention much slower).
> Consider for instance a collation that ignores punctuation:
>
> CREATE COLLATION "nd3alt" (
>   provider = 'icu',
>   locale='und@colAlternate=shifted',
>   deterministic = false
> );
>
> In the icu_ext extension, icu_strpos [2] can match a substring with
> a nondeterministic collation, which is one part of what LIKE
> would need to do for such collations. The implementation uses
> the string search facility of the ICU library.
>
> With the above-defined collation, we can have for instance:
>
> SELECT icu_strpos('abc.  ...de', 'c,d' COLLATE nd3alt);
>  icu_strpos
> ------------
>           3
>
> So even though 'c,d' is not a substring of 'abc.  ...de' in the common
> sense, it is recognized as such by this collation, by design.
>
> A LIKE operator for nondeterministic collations should be able to
> recognize this too, but with an arbitrary number of substrings to
> match in the pattern, plus it should handle the underscore wildcard
> in a way that hopefully makes sense.
>
> With the example above,
>'abc.  ...de' LIKE '%c,d%' COLLATE nd3alt
> should certainly be a match, but in the case of this variant:
>   'abc.  ...de' LIKE '%c_d%' COLLATE nd3alt
> it's not necessarily clear how (or even if) it should work.
>
>
> [1] https://www.postgresql.org/developer/roadmap/
> [2] https://github.com/dverite/icu_ext#icu_strpos
>
>
> Best regards,
> --
> Daniel Vérité
> PostgreSQL-powered mailer: http://www.manitou-mail.org
> Twitter: @DanielVerite
>


Re: SQL Error [0A000]: ERROR: nondeterministic collations are not supported for LIKE

2019-10-24 Thread Daniel Verite
Jeff Lanzarotta wrote:

> I have a question about nondeterministic collations in PostgreSQL 12.  I
> have created a new collation that is nondeterministic and created several
> columns which use this collation.  Querying these columns works great until
> I use LIKE.  When I do, I get the following error:
> 
> SQL Error [0A000]: ERROR: nondeterministic collations are not supported for
> LIKE
> 
> Is there any plan to allow this functionality?

PostgreSQL development is conducted without a roadmap [1]. Maybe
someone will submit a patch to enable LIKE with nondeterministic
collations, but so far that has not happened, judging by the current set
of patches at https://commitfest.postgresql.org

Such matches can be weirder than you might think (not to
mention much slower).
Consider for instance a collation that ignores punctuation:

CREATE COLLATION "nd3alt" (
  provider = 'icu',
  locale='und@colAlternate=shifted',
  deterministic = false
);

In the icu_ext extension, icu_strpos [2] can match a substring with
a nondeterministic collation, which is one part of what LIKE
would need to do for such collations. The implementation uses
the string search facility of the ICU library.

With the above-defined collation, we can have for instance:

SELECT icu_strpos('abc.  ...de', 'c,d' COLLATE nd3alt);
 icu_strpos
------------
          3

So even though 'c,d' is not a substring of 'abc.  ...de' in the common
sense, it is recognized as such by this collation, by design.

A LIKE operator for nondeterministic collations should be able to
recognize this too, but with an arbitrary number of substrings to
match in the pattern, plus it should handle the underscore wildcard
in a way that hopefully makes sense. 

With the example above,
   'abc.  ...de' LIKE '%c,d%' COLLATE nd3alt
should certainly be a match, but in the case of this variant:
  'abc.  ...de' LIKE '%c_d%' COLLATE nd3alt
it's not necessarily clear how (or even if) it should work.


[1] https://www.postgresql.org/developer/roadmap/
[2] https://github.com/dverite/icu_ext#icu_strpos


Best regards,
-- 
Daniel Vérité
PostgreSQL-powered mailer: http://www.manitou-mail.org
Twitter: @DanielVerite




Re: jsonb_set() strictness considered harmful to data

2019-10-24 Thread Laurenz Albe
On Wed, 2019-10-23 at 13:00 -0600, Stuart McGraw wrote:
> > You can only say that if you don't understand NULL (you wouldn't be alone).
> > If I modify a JSON with an unknown value, the result is unknown.
> > This seems very intuitive to me.
> 
> Would you expect modifying an array value with an unknown would result
> in the entire array being unknown?

Hm, yes, that is less intuitive.
I was viewing a JSON as an atomic value above.

> > One could argue that whoever uses SQL should understand SQL.
> > 
> > But I believe that it is reasonable to suppose that many people who
> > use JSON in the database are more savvy with JSON than with SQL
> > (they might not have chosen JSON otherwise), so I agree that it makes
> > sense to change this particular behavior.
> 
> That (generally) SQL NULL results in NULL for any operation has been
> brought up multiple times in this thread, including above, as a rationale
> for the current jsonb behavior.  I don't think it is a valid argument.
> 
> When examples are given, they typically are with scalar values where
> such behavior makes sense: the resulting scalar value has to be NULL
> or non-NULL, it can't be both.
> 
> It is less sensible with compound values where the rule can apply to
> individual scalar components.  And indeed that is what Postgresql does
> for another compound type:
> 
># select array_replace(array[1,2,3],2,NULL);
>  array_replace
> ---------------
>  {1,NULL,3}
> 
> The returned value is not NULL.  Why the inconsistency between the array
> type and json type?  Are there any cases other than json where the entire
> compound value is set to NULL as a result of one of its components being
> NULL?

That is a good point.

I agree that the behavior should be changed.

Yours,
Laurenz Albe
-- 
Cybertec | https://www.cybertec-postgresql.com





Re: Recover databases from raw files (only exists the base directory)

2019-10-24 Thread Laurenz Albe
Carlos Martinez wrote:
> I have been contacted about this interesting issue: on a standalone
> PostgreSQL 9.4 server someone deleted all PostgreSQL files and directories
> except the base directory and its content (/var/lib/pgsql/9.4-base/data/base).
> 
> So, the question is: is there any chance/procedure to recover the databases?
> 
> As usual, the last backup is "a few" weeks old. So, can this dump be used to
> recover the recent data?

I'd cut the losses and take the backup.

You'd need a specialist to recover anything there.

Yours,
Laurenz Albe
-- 
Cybertec | https://www.cybertec-postgresql.com





Re: Lookup Primary Key of Foreign Server's Table

2019-10-24 Thread Chris Morris
Yeah. Part of my confusion is due to Heroku providing a Data Links service
that handles a lot of the internal details around establishing a
foreign server connection, and not knowing exactly what to expect.

I experimented with IMPORT FOREIGN SCHEMA in a couple of test databases and
noticed that there was no declaration of primary key brought over to the
local database. So, this is something ActiveRecord simply won't be able to
handle right now, and I have to take care of that myself, which is easy
enough to do.

Thx for the feedback.

On Wed, Oct 23, 2019 at 11:12 AM David G. Johnston <
david.g.johns...@gmail.com> wrote:

> On Wed, Oct 23, 2019 at 8:47 AM Chris Morris 
> wrote:
>
>> The foreign table has a primary key. Ruby on Rails uses a system query to
>> lookup what the primary key on the table is, but it's querying the local
>> database, and not finding anything. In a schema dump of the local database,
>> I don't see a primary key defined, so I'm presuming I need to issue an ADD
>> CONSTRAINT command against the foreign table to reflect what is actually
>> true in the foreign table? Is that correct?
>>
>
> The documentation says doing what you suggest won't work...
>
> David J.
>
>


Recover databases from raw files (only exists the base directory)

2019-10-24 Thread Carlos Martinez
Hi.

I have been contacted about this interesting issue: on a standalone
PostgreSQL 9.4 server someone deleted all PostgreSQL files and
directories except the base directory and its content
(/var/lib/pgsql/9.4-base/data/base).

So, the question is: is there any chance/procedure to recover the databases?

As usual, the last backup is "a few" weeks old. So, can this dump be used to
recover the recent data?

Thanks a lot for your help.

Carlos Martínez.


Re: Search path

2019-10-24 Thread Adrian Klaver

On 10/24/19 9:41 AM, stan wrote:

I was just educated on the security issues of search_path. As a result
I am going to define a schema for the project we are working on.
I set this in postgresql.conf:

search_path = 'ica, "$user", public'

Here is the question. Will this path be in effect for users connecting from
MS Access clients? The name of this file makes me think that it only
controls instances of psql, but I need this to be effective for all
connections to the database.



It is under:

Client Connection Defaults
https://www.postgresql.org/docs/11/runtime-config-client.html

so it works for clients that connect to Postgres, not just psql. I can
confirm it works with Access.
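
For narrower scoping, the same setting can also be attached to a database
or role (a sketch; the names here are hypothetical):

ALTER DATABASE projectdb SET search_path = ica, "$user", public;
ALTER ROLE access_user SET search_path = ica, "$user", public;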



--
Adrian Klaver
adrian.kla...@aklaver.com




Search path

2019-10-24 Thread stan
I was just educated on the security issues of search_path. As a result
I am going to define a schema for the project we are working on.
I set this in postgresql.conf:

search_path = 'ica, "$user", public'

Here is the question. Will this path be in effect for users connecting from
MS Access clients? The name of this file makes me think that it only
controls instances of psql, but I need this to be effective for all
connections to the database.

-- 
"They that would give up essential liberty for temporary safety deserve
neither liberty nor safety."
-- Benjamin Franklin




Re: Having more than one constraint trigger on a table

2019-10-24 Thread Andreas Joseph Krogh


On Thursday, October 24, 2019 at 16:59:42, Adrian Klaver
<adrian.kla...@aklaver.com> wrote:
On 10/22/19 8:26 AM, Andreas Joseph Krogh wrote:
 > On Tuesday, October 22, 2019 at 17:12:59, Adrian Klaver
 > <adrian.kla...@aklaver.com> wrote:
 >
 > [snip]
 > No.
 > When I sort the triggers I get:
 >
 > test=# create table trg_str(fld_1 varchar);
 > CREATE TABLE
 > test=# insert into trg_str values ('trigger_1_update_fts'),
 > ('trigger_2'), ('trigger_3'), ('trigger_1_check_nocycle');
 > INSERT 0 4
 > test=# select * from trg_test order by fld_1 ;
 > id | fld_1
 > +---
 > (0 rows)
 >
 > test=# select * from trg_str order by fld_1 ;
 > fld_1
 > -
 > trigger_1_check_nocycle
 > trigger_1_update_fts
 > trigger_2
 > trigger_3
 >
 > Is this how you want them to fire as it does not match what you say
 > above?:
 >
 > (I know they were not /declared/ in that order, but..)
 > Yes, all "trigger_1_*" are the "actual triggers triggering the logic";
 > trigger_2 and trigger_3 are only there as part of the "make
 > constraint-triggers fire only once"-mechanism, in which the function in
 > the first trigger is the function performing the actual logic.
 > So, since I want 2 "logical chunks" to happen, I have two
 > "trigger_1"-triggers (there is no established terminology for this
 > AFAIK), each calling a function performing the logic which is to happen
 > only once (per row).
 >
 > "The first "main" trigger-function is update_company_fts_tf() ... The
 > second "main" trigger-function is company_parent_no_cycle()"
 >
 > It might be easier to understand if sketch out a schematic version of
 > what you are trying to achieve.
 >
 > The point is: I want two functions to be called
 > - update_company_fts_tf()
 > - company_parent_no_cycle()
 > , each only once, as constraint-triggers on the same table. So they are
 > called by the "level 1 triggers" which must fire first.

 To be clear, the order they fire relative to each other is not important?


Correct, these main functions may fire in any order. 



> Is it clearer now what I'm trying to achieve?

 Sort of, though I am still not entirely sure what the whole process is
 trying to achieve. What the mix of deferred and un-deferred triggers and
 'logical' and housekeeping functions is doing is not clear to me. That
 is why I suggested a schematic representation of the trigger flow would
 be helpful. Leave out the fine details and create a flow chart of what
 you want to happen.


Normally, CONSTRAINT TRIGGERs will fire once for each UPDATE. That means, if
you do a CREATE, then 2 UPDATEs, the trigger(s) will fire 3 times for each
row. I'm trying to make these triggers fire only ONCE per row, and at COMMIT
(being CONSTRAINT TRIGGERs).

I'm using the trick mentioned here to achieve this: 

https://stackoverflow.com/questions/8937203/execute-deferred-trigger-only-once-per-row-in-postgresql#answer-8954058
 



But I'm trying to have more than one CONSTRAINT TRIGGER on the same table,
each one doing different things and reacting (triggering) on different
columns, and I'm wondering if I can "re-use" the "cleanup-triggers" 2 and 3 as
I mentioned, having trigger 2 fire on the sum of all involved COLUMNS (name,
duns_number, parent_id)?
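
A skeleton of one such deferred constraint trigger, assuming a company table
with the columns mentioned above (the once-per-row bookkeeping done by
triggers 2 and 3 follows the linked answer and is omitted here):

CREATE TABLE company (id serial PRIMARY KEY, name text, duns_number text, parent_id integer);

CREATE FUNCTION update_company_fts_tf() RETURNS trigger
LANGUAGE plpgsql AS $$
BEGIN
    -- the real work (e.g. rebuilding the row's FTS document) goes here
    RETURN NULL;  -- return value is ignored for AFTER row triggers
END;
$$;

CREATE CONSTRAINT TRIGGER trigger_1_update_fts
    AFTER INSERT OR UPDATE OF name, duns_number ON company
    DEFERRABLE INITIALLY DEFERRED
    FOR EACH ROW EXECUTE PROCEDURE update_company_fts_tf();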

-- 
Andreas Joseph Krogh 
CTO / Partner - Visena AS 
Mobile: +47 909 56 963 
andr...@visena.com  
www.visena.com  
 

SQL Error [0A000]: ERROR: nondeterministic collations are not supported for LIKE

2019-10-24 Thread Jeff Lanzarotta
Hello,

Not sure if my original email was actually received or not.  If it was,
forgive the repost...

I have a question about nondeterministic collations in PostgreSQL 12.  I
have created a new collation that is nondeterministic and created several
columns which use this collation.  Querying these columns works great until
I use LIKE.  When I do, I get the following error:

SQL Error [0A000]: ERROR: nondeterministic collations are not supported for
LIKE

Is there any plan to allow this functionality?

Thanks,


Re: Having more than one constraint trigger on a table

2019-10-24 Thread Adrian Klaver

On 10/22/19 8:26 AM, Andreas Joseph Krogh wrote:
On Tuesday, October 22, 2019 at 17:12:59, Adrian Klaver
<adrian.kla...@aklaver.com> wrote:


[snip]
No.
When I sort the triggers I get:

test=# create table trg_str(fld_1 varchar);
CREATE TABLE
test=# insert into trg_str values ('trigger_1_update_fts'),
('trigger_2'), ('trigger_3'), ('trigger_1_check_nocycle');
INSERT 0 4
test=# select * from trg_test order by fld_1 ;
   id | fld_1
+---
(0 rows)

test=# select * from trg_str order by fld_1 ;
            fld_1
-
   trigger_1_check_nocycle
   trigger_1_update_fts
   trigger_2
   trigger_3

Is this how you want them to fire as it does not match what you say
above?:

(I know they were not /declared/ in that order, but..)
Yes, all "trigger_1_*" are the "actual triggers triggering the logic";
trigger_2 and trigger_3 are only there as part of the "make
constraint-triggers fire only once"-mechanism, in which the function in
the first trigger is the function performing the actual logic.
So, since I want 2 "logical chunks" to happen, I have two
"trigger_1"-triggers (there is no established terminology for this
AFAIK), each calling a function performing the logic which is to happen
only once (per row).


"The first "main" trigger-function is update_company_fts_tf() ... The
second "main" trigger-function is company_parent_no_cycle()"

It might be easier to understand if sketch out a schematic version of
what you are trying to achieve.

The point is: I want two functions to be called
- update_company_fts_tf()
- company_parent_no_cycle()
, each only once, as constraint-triggers on the same table. So they are
called by the "level 1 triggers" which must fire first.


To be clear, the order they fire relative to each other is not important?


Is it clearer now what I'm trying to achieve?


Sort of, though I am still not entirely sure what the whole process is
trying to achieve. What the mix of deferred and un-deferred triggers and
'logical' and housekeeping functions is doing is not clear to me. That
is why I suggested a schematic representation of the trigger flow would
be helpful. Leave out the fine details and create a flow chart of what
you want to happen.



--
Andreas Joseph Krogh



--
Adrian Klaver
adrian.kla...@aklaver.com




Re: LocalTransactionId vs txid_current

2019-10-24 Thread Tom Lane
Luca Ferrari  writes:
> I'm exploring DTrace, and I thought that LocalTransactionId as
> described in 
> would have a relationship with txid_current(),

It does not.  txid_current returns a globally valid transaction ID,
while local transaction IDs are just taken from a counter within
the session and have no meaning to other sessions.
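A quick way to see the local counter (it shows up as the backendID/localXID
pair in pg_locks.virtualtransaction, while txid_current() allocates and
returns the global ID):

SELECT virtualtransaction, transactionid
FROM pg_locks
WHERE pid = pg_backend_pid();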

regards, tom lane




Re: A very puzzling backup/restore problem

2019-10-24 Thread Adrian Klaver

On 10/24/19 7:32 AM, stan wrote:

On Thu, Oct 24, 2019 at 07:04:11AM -0700, Adrian Klaver wrote:

On 10/24/19 3:52 AM, stan wrote:



I have a very confusing issue. I am trying to back up and restore a single
table.

first I dump the table.


Actually you are just dumping the table data.

More below.



Script started on 2019-10-24 06:29:12-0400
stan@smokey:~$ pg_dump -a --table task_instance > task_instance.dump

Then I connect to the db, and verify that things are as expected.

stan@smokey:~$ psql
psql (11.5 (Ubuntu 11.5-3.pgdg18.04+1))
Type "help" for help.

[local] stan@stan=> \dt
   List of relations
   Schema |   Name   | Type  | Owner
+--+---+---
   public | biz_constants| table | stan
   public | bom_item | table | stan
   public | costing_unit | table | stan
   public | customer | table | stan
   public | earthquake   | table | stan
   public | employee | table | stan
   public | expense_report_instance  | table | stan
   public | gl_code  | table | stan
   public | mfg  | table | stan
   public | mfg_part | table | stan
   public | mfg_vendor_relationship  | table | stan
   public | permitted_work   | table | stan
   public | phone_number_test| table | stan
   public | project  | table | stan
   public | project_budget_component | table | stan
   public | project_cost_category| table | stan
   public | rate | table | stan
   public | salary   | table | stan
   public | task_instance| table | stan
   public | vendor   | table | stan
   public | work_type| table | stan
(21 rows)

[local] stan@stan=> \d task_instance
                                     Table "public.task_instance"
    Column     |           Type           | Collation | Nullable |                    Default
---------------+--------------------------+-----------+----------+-----------------------------------------------
 task_instance | integer                  |           | not null | nextval('task_instance_key_serial'::regclass)
 project_key   | integer                  |           | not null |
 employee_key  | integer                  |           | not null |
 work_type_key | integer                  |           | not null |
 hours         | numeric(5,2)             |           | not null |
 work_start    | timestamp with time zone |           | not null |
 work_end      | timestamp with time zone |           | not null |
 modtime       | timestamp with time zone |           | not null | CURRENT_TIMESTAMP
 lock          | boolean                  |           |          | true
 descrip       | character varying        |           |          |
Indexes:
    "task_instance_pkey" PRIMARY KEY, btree (task_instance)
    "task_constraint" UNIQUE CONSTRAINT, btree (employee_key, work_type_key, project_key, work_start, work_end)
Foreign-key constraints:
    "task_instance_employee_key_fkey" FOREIGN KEY (employee_key) REFERENCES empl

[local] stan@stan=> \d permitted_work
                      Table "public.permitted_work"
    Column     |           Type           | Collation | Nullable |      Default
---------------+--------------------------+-----------+----------+-------------------
 employee_key  | integer                  |           | not null |
 work_type_key | integer                  |           | not null |
 permit        | boolean                  |           | not null | false
 modtime       | timestamp with time zone |           | not null | CURRENT_TIMESTAMP
Indexes:
    "permit_constraint" UNIQUE CONSTRAINT, btree (employee_key, work_type_key)
Foreign-key constraints:
    "permitted_work_employee_key_fkey" FOREIGN KEY (employee_key) REFERENCES employee(employee_key) ON DELETE RESTRICT
    "permitted_work_work_type_key_fkey" FOREIGN KEY (work_type_key) REFERENCES work_type(work_type_key) ON DELETE RESTRICT

Then I delete the rows from the table.

[local] stan@stan=> delete from task_instance ;
DELETE 31

Then I try to restore from the dump file.

[local] stan@stan=> \i task_instance.dump
SET
SET
SET
SET
SET
   set_config

(1 row)

SET
SET
SET
SET
psql:task_instance.dump:55: ERROR:  relation "permitted_work" does not exist
LINE 3: permitted_work
  ^


In your \d task_instance above I do not see a trigger that calls
public.check_permission(). Does one exist, or was it cut off in the output
you pasted?


It exists. Perhaps I am using a different \d format? I am not accustomed

Re: A very puzzling backup/restore problem

2019-10-24 Thread stan
On Thu, Oct 24, 2019 at 07:04:11AM -0700, Adrian Klaver wrote:
> On 10/24/19 3:52 AM, stan wrote:
> > 
> > 
> > I have a very confusing issue. I am trying to back up and restore a
> > single table.
> > 
> > first I dump the table.
> 
> Actually you are just dumping the table data.
> 
> More below.
> > 
> > 
> > Script started on 2019-10-24 06:29:12-0400
> > stan@smokey:~$ pg_dump -a --table task_instance > task_instance.dump
> > 
> > Then I connect to the db, and verify that things are as expected.
> > 
> > stan@smokey:~$ psql
> > psql (11.5 (Ubuntu 11.5-3.pgdg18.04+1))
> > Type "help" for help.
> > 
> > [local] stan@stan=> \dt
> >   List of relations
> >   Schema |   Name   | Type  | Owner
> > +--+---+---
> >   public | biz_constants| table | stan
> >   public | bom_item | table | stan
> >   public | costing_unit | table | stan
> >   public | customer | table | stan
> >   public | earthquake   | table | stan
> >   public | employee | table | stan
> >   public | expense_report_instance  | table | stan
> >   public | gl_code  | table | stan
> >   public | mfg  | table | stan
> >   public | mfg_part | table | stan
> >   public | mfg_vendor_relationship  | table | stan
> >   public | permitted_work   | table | stan
> >   public | phone_number_test| table | stan
> >   public | project  | table | stan
> >   public | project_budget_component | table | stan
> >   public | project_cost_category| table | stan
> >   public | rate | table | stan
> >   public | salary   | table | stan
> >   public | task_instance| table | stan
> >   public | vendor   | table | stan
> >   public | work_type| table | stan
> > (21 rows)
> > 
> > [local] stan@stan=> \d task_instance
> >                                      Table "public.task_instance"
> >     Column     |           Type           | Collation | Nullable |                    Default
> > ---------------+--------------------------+-----------+----------+-----------------------------------------------
> >  task_instance | integer                  |           | not null | nextval('task_instance_key_serial'::regclass)
> >  project_key   | integer                  |           | not null |
> >  employee_key  | integer                  |           | not null |
> >  work_type_key | integer                  |           | not null |
> >  hours         | numeric(5,2)             |           | not null |
> >  work_start    | timestamp with time zone |           | not null |
> >  work_end      | timestamp with time zone |           | not null |
> >  modtime       | timestamp with time zone |           | not null | CURRENT_TIMESTAMP
> >  lock          | boolean                  |           |          | true
> >  descrip       | character varying        |           |          |
> > Indexes:
> >     "task_instance_pkey" PRIMARY KEY, btree (task_instance)
> >     "task_constraint" UNIQUE CONSTRAINT, btree (employee_key, work_type_key, project_key, work_start, work_end)
> > Foreign-key constraints:
> >     "task_instance_employee_key_fkey" FOREIGN KEY (employee_key) REFERENCES empl
> >
> > [local] stan@stan=> \d permitted_work
> >                       Table "public.permitted_work"
> >     Column     |           Type           | Collation | Nullable |      Default
> > ---------------+--------------------------+-----------+----------+-------------------
> >  employee_key  | integer                  |           | not null |
> >  work_type_key | integer                  |           | not null |
> >  permit        | boolean                  |           | not null | false
> >  modtime       | timestamp with time zone |           | not null | CURRENT_TIMESTAMP
> > Indexes:
> >     "permit_constraint" UNIQUE CONSTRAINT, btree (employee_key, work_type_key)
> > Foreign-key constraints:
> >     "permitted_work_employee_key_fkey" FOREIGN KEY (employee_key) REFERENCES employee(employee_key) ON DELETE RESTRICT
> >     "permitted_work_work_type_key_fkey" FOREIGN KEY (work_type_key) REFERENCES work_type(work_type_key) ON DELETE RESTRICT
> > 
> > Then I delete the rows from the table.
> > 
> > [local] stan@stan=> delete from task_instance ;
> > DELETE 31
> > 
> > Then I try to restore from the dump file.
> > 
> > [local] stan@stan=> \i task_instance.dump
> > SET
> > SET
> > SET
> > SET
> > SET
> >   set_config
> > 
> > (1 row)
> > 
> > SET
> > SET
> > 

Re: PGPool version 4.0.6-1

2019-10-24 Thread Adrian Klaver

On 10/24/19 6:10 AM, Vikas Sharma wrote:

Hi All,

I am confused about the pgpool-II version.

Can you tell me the difference between pgpool-II-pg11-4.0.6-1 and
pgpool-II-pg96-4.0.6-1?


To me, it looks like it is the same version of pgpool-II, i.e. 4.0.6, but
compiled with PostgreSQL 11 or PostgreSQL 9.6; otherwise they both have
the same features...


https://www.pgpool.net/mediawiki/index.php/Downloads
"pgpool-II

pgpool-II-pg93-3.4.0-2pgdg.rhel6.x86_64.rpm etc. is the main 
pgpool-II rpm and should be installed into pgpool-II server. 
pgpool-II-pg93-extensions-3.4.0-2pgdg.rhel6.x86_64.rpm etc. is 
PostgreSQL extension and should be installed into PostgreSQL server. 
Please use suitable one depending on PostgreSQL version presented by 
"-pgXX"."




Regards
Vikas



--
Adrian Klaver
adrian.kla...@aklaver.com




Re: A very puzzling backup/restore problem

2019-10-24 Thread Adrian Klaver

On 10/24/19 3:52 AM, stan wrote:



I have a very confusing issue. I am trying to back up and restore a single
table.

first I dump the table.


Actually you are just dumping the table data.

More below.



Script started on 2019-10-24 06:29:12-0400
stan@smokey:~$ pg_dump -a --table task_instance > task_instance.dump

Then I connect to the db, and verify that things are as expected.

stan@smokey:~$ psql
psql (11.5 (Ubuntu 11.5-3.pgdg18.04+1))
Type "help" for help.

[local] stan@stan=> \dt
  List of relations
  Schema |   Name   | Type  | Owner
+--+---+---
  public | biz_constants| table | stan
  public | bom_item | table | stan
  public | costing_unit | table | stan
  public | customer | table | stan
  public | earthquake   | table | stan
  public | employee | table | stan
  public | expense_report_instance  | table | stan
  public | gl_code  | table | stan
  public | mfg  | table | stan
  public | mfg_part | table | stan
  public | mfg_vendor_relationship  | table | stan
  public | permitted_work   | table | stan
  public | phone_number_test| table | stan
  public | project  | table | stan
  public | project_budget_component | table | stan
  public | project_cost_category| table | stan
  public | rate | table | stan
  public | salary   | table | stan
  public | task_instance| table | stan
  public | vendor   | table | stan
  public | work_type| table | stan
(21 rows)

[local] stan@stan=> \d task_instance
                                     Table "public.task_instance"
    Column     |           Type           | Collation | Nullable |                    Default
---------------+--------------------------+-----------+----------+-----------------------------------------------
 task_instance | integer                  |           | not null | nextval('task_instance_key_serial'::regclass)
 project_key   | integer                  |           | not null |
 employee_key  | integer                  |           | not null |
 work_type_key | integer                  |           | not null |
 hours         | numeric(5,2)             |           | not null |
 work_start    | timestamp with time zone |           | not null |
 work_end      | timestamp with time zone |           | not null |
 modtime       | timestamp with time zone |           | not null | CURRENT_TIMESTAMP
 lock          | boolean                  |           |          | true
 descrip       | character varying        |           |          |
Indexes:
    "task_instance_pkey" PRIMARY KEY, btree (task_instance)
    "task_constraint" UNIQUE CONSTRAINT, btree (employee_key, work_type_key, project_key, work_start, work_end)
Foreign-key constraints:
    "task_instance_employee_key_fkey" FOREIGN KEY (employee_key) REFERENCES empl

[local] stan@stan=> \d permitted_work
                      Table "public.permitted_work"
    Column     |           Type           | Collation | Nullable |      Default
---------------+--------------------------+-----------+----------+-------------------
 employee_key  | integer                  |           | not null |
 work_type_key | integer                  |           | not null |
 permit        | boolean                  |           | not null | false
 modtime       | timestamp with time zone |           | not null | CURRENT_TIMESTAMP
Indexes:
    "permit_constraint" UNIQUE CONSTRAINT, btree (employee_key, work_type_key)
Foreign-key constraints:
    "permitted_work_employee_key_fkey" FOREIGN KEY (employee_key) REFERENCES employee(employee_key) ON DELETE RESTRICT
    "permitted_work_work_type_key_fkey" FOREIGN KEY (work_type_key) REFERENCES work_type(work_type_key) ON DELETE RESTRICT

Then I delete the rows from the table.

[local] stan@stan=> delete from task_instance ;
DELETE 31

Then I try to restore from the dump file.

[local] stan@stan=> \i task_instance.dump
SET
SET
SET
SET
SET
  set_config

  
(1 row)


SET
SET
SET
SET
psql:task_instance.dump:55: ERROR:  relation "permitted_work" does not exist
LINE 3: permitted_work
 ^


In your \d task_instance above I do not see a trigger that calls
public.check_permission(). Does one exist, or was it cut off in the output
you pasted?


Also look in the dump file. Given that you are using 11.5 I'm going to
assume it is resetting the search_path, and that the schema-unqualified
reference to permitted_work below is your issue.



QUERY:  SELECT
   

Re: A very puzzling backup/restore problem

2019-10-24 Thread stan
On Thu, Oct 24, 2019 at 07:57:14AM -0500, Ron wrote:
> Why did you do "\i task_instance.dump" instead of "pg_restore"?

Ah, that may be the root of my problem. I had in the back of my mind that
the result of a pg_dump was a free-standing SQL script, but that may be
exactly what is going on. I will try to use pg_restore, and make certain
that resolves my issue.

In the meantime, I found that if I deleted all the SET lines from the dump
I could restore with psql. But that is probably not the right way to do
this.
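
For reference, a sketch of the pg_restore route (pg_restore reads only the
custom, directory, or tar formats; a plain-format dump like the one above is
a SQL script meant for psql; the database name here is assumed from the
prompt):

stan@smokey:~$ pg_dump -Fc -a --table task_instance stan > task_instance.dump
stan@smokey:~$ pg_restore -a -d stan task_instance.dump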

> 
> On 10/24/19 5:52 AM, stan wrote:
> > 
> > I have a very confusing issue. I am trying to back up and restore a
> > single table.
> > 
> > first I dump the table.
> > 
> > 
> > Script started on 2019-10-24 06:29:12-0400
> > stan@smokey:~$ pg_dump -a --table task_instance > task_instance.dump
> > 
> > Then I connect to the db, and verify that things are as expected.
> > 
> > stan@smokey:~$ psql
> > psql (11.5 (Ubuntu 11.5-3.pgdg18.04+1))
> > Type "help" for help.
> > 
> > [local] stan@stan=> \dt
> >   List of relations
> >   Schema |   Name   | Type  | Owner
> > +--+---+---
> >   public | biz_constants| table | stan
> >   public | bom_item | table | stan
> >   public | costing_unit | table | stan
> >   public | customer | table | stan
> >   public | earthquake   | table | stan
> >   public | employee | table | stan
> >   public | expense_report_instance  | table | stan
> >   public | gl_code  | table | stan
> >   public | mfg  | table | stan
> >   public | mfg_part | table | stan
> >   public | mfg_vendor_relationship  | table | stan
> >   public | permitted_work   | table | stan
> >   public | phone_number_test| table | stan
> >   public | project  | table | stan
> >   public | project_budget_component | table | stan
> >   public | project_cost_category| table | stan
> >   public | rate | table | stan
> >   public | salary   | table | stan
> >   public | task_instance| table | stan
> >   public | vendor   | table | stan
> >   public | work_type| table | stan
> > (21 rows)
> > 
> > [local] stan@stan=> \d task_instance
> >                                      Table "public.task_instance"
> >     Column     |           Type           | Collation | Nullable |                    Default
> > ---------------+--------------------------+-----------+----------+-----------------------------------------------
> >  task_instance | integer                  |           | not null | nextval('task_instance_key_serial'::regclass)
> >  project_key   | integer                  |           | not null |
> >  employee_key  | integer                  |           | not null |
> >  work_type_key | integer                  |           | not null |
> >  hours         | numeric(5,2)             |           | not null |
> >  work_start    | timestamp with time zone |           | not null |
> >  work_end      | timestamp with time zone |           | not null |
> >  modtime       | timestamp with time zone |           | not null | CURRENT_TIMESTAMP
> >  lock          | boolean                  |           |          | true
> >  descrip       | character varying        |           |          |
> > Indexes:
> >     "task_instance_pkey" PRIMARY KEY, btree (task_instance)
> >     "task_constraint" UNIQUE CONSTRAINT, btree (employee_key, work_type_key, project_key, work_start, work_end)
> > Foreign-key constraints:
> >     "task_instance_employee_key_fkey" FOREIGN KEY (employee_key) REFERENCES empl
> >
> > [local] stan@stan=> \d permitted_work
> >                       Table "public.permitted_work"
> >     Column     |           Type           | Collation | Nullable |      Default
> > ---------------+--------------------------+-----------+----------+-------------------
> >  employee_key  | integer                  |           | not null |
> >  work_type_key | integer                  |           | not null |
> >  permit        | boolean                  |           | not null | false
> >  modtime       | timestamp with time zone |           | not null | CURRENT_TIMESTAMP
> > Indexes:
> >     "permit_constraint" UNIQUE CONSTRAINT, btree (employee_key, work_type_key)
> > Foreign-key constraints:
> >     "permitted_work_employee_key_fkey" FOREIGN KEY (employee_key) REFERENCES employee(employee_key) ON DELETE RESTRICT
> >     "permitted_work_work_type_key_fkey" FOREIGN KEY (work_type_key) REFERENCES work_type(work_type_key) ON DELETE RESTRICT
> > 
> > 

LocalTransactionId vs txid_current

2019-10-24 Thread Luca Ferrari
I'm exploring DTrace, and I thought that LocalTransactionId as
described in 
would have a relationship with txid_current(), but apparently I'm not
getting it:

template1=# begin;
template1=# select txid_current();
 txid_current
--
  488
template1=# commit;

but DTrace shows me:

% sudo ./postgresql.d 33381
[TRANSACTION START] 28  postgres (33381) called StartTransaction
[TRANSACTION COMMIT] 28  postgres (33381) called CommitTransaction

with my script being:

postgresql$1:::transaction-start
{
  printf("[TRANSACTION START] %d  %s (%d) called %s\n", args[0],
execname, pid, probefunc);
  @tx["begin"] = count();
  self->tx_ts  = timestamp;
}


postgresql$1:::transaction-commit
/self->tx_ts/
{
  printf("[TRANSACTION COMMIT] %d  %s (%d) called %s\n", args[0],
execname, pid, probefunc);
  @tx["commit"] = count();
  @tx_secs["commit"] = quantize( ( timestamp - self->tx_ts ) / 1000 );
  self->tx_ts   = 0;
}


The args[0] is incremented each time a new transaction is started, even if
the transaction is never supposed to get a non-virtual txid:

template1=# begin;
template1=# rollback;

shows:

% sudo ./postgresql.d 33381
[TRANSACTION START] 29  postgres (33381) called StartTransaction

If I restart the server, args[0] starts from 4 again, which reminds me of
the minimum xid available.
Any way to get from LocalTransactionId to txid_current (if assigned)?

Thanks,
Luca
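
A minimal sketch showing the relationship from SQL (assuming PostgreSQL 10
or later): every transaction holds a lock on its own virtual xid, whose
second component is the LocalTransactionId reported by the probes, while
txid_current_if_assigned() stays NULL until a real xid exists:

BEGIN;
-- backendID/localXID; the second number is the LocalTransactionId
SELECT virtualxid FROM pg_locks
WHERE pid = pg_backend_pid() AND locktype = 'virtualxid';

SELECT txid_current_if_assigned();   -- NULL: no real xid assigned yet
SELECT txid_current();               -- forces xid assignment, e.g. 488
SELECT txid_current_if_assigned();   -- now matches txid_current()
COMMIT;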




PGPool version 4.0.6-1

2019-10-24 Thread Vikas Sharma
Hi All,

I am confused about the Pgpool-II version.

Can you let me know the difference between pgpool-II-pg11-4.0.6-1 and
pgpool-II-96-4.0.6-1?

To me, it looks like the same version of pgpool-II, i.e. 4.0.6, but compiled
against PostgreSQL 11 or PostgreSQL 9.6; otherwise they both have the same
features...

Regards
Vikas


A very puzzling backup/restore problem

2019-10-24 Thread stan



I have a very confusing issue. I am trying to back up and restore a single
table.

first I dump the table.


Script started on 2019-10-24 06:29:12-0400
stan@smokey:~$ pg_dump -a --table task_instance > task_instance.dump

Then I connect to the db and verify that things are as expected.

stan@smokey:~$ psql
psql (11.5 (Ubuntu 11.5-3.pgdg18.04+1))
Type "help" for help.

[local] stan@stan=> \dt
 List of relations
 Schema |   Name   | Type  | Owner 
+--+---+---
 public | biz_constants| table | stan
 public | bom_item | table | stan
 public | costing_unit | table | stan
 public | customer | table | stan
 public | earthquake   | table | stan
 public | employee | table | stan
 public | expense_report_instance  | table | stan
 public | gl_code  | table | stan
 public | mfg  | table | stan
 public | mfg_part | table | stan
 public | mfg_vendor_relationship  | table | stan
 public | permitted_work   | table | stan
 public | phone_number_test| table | stan
 public | project  | table | stan
 public | project_budget_component | table | stan
 public | project_cost_category| table | stan
 public | rate | table | stan
 public | salary   | table | stan
 public | task_instance| table | stan
 public | vendor   | table | stan
 public | work_type| table | stan
(21 rows)

[local] stan@stan=> \d task_instance
                                Table "public.task_instance"
    Column     |           Type           | Collation | Nullable |                    Default
---------------+--------------------------+-----------+----------+-----------------------------------------------
 task_instance | integer                  |           | not null | nextval('task_instance_key_serial'::regclass)
 project_key   | integer                  |           | not null |
 employee_key  | integer                  |           | not null |
 work_type_key | integer                  |           | not null |
 hours         | numeric(5,2)             |           | not null |
 work_start    | timestamp with time zone |           | not null |
 work_end      | timestamp with time zone |           | not null |
 modtime       | timestamp with time zone |           | not null | CURRENT_TIMESTAMP
 lock          | boolean                  |           |          | true
 descrip       | character varying        |           |          |
Indexes:
    "task_instance_pkey" PRIMARY KEY, btree (task_instance)
    "task_constraint" UNIQUE CONSTRAINT, btree (employee_key, work_type_key, project_key, work_start, work_end)
Foreign-key constraints:
    "task_instance_employee_key_fkey" FOREIGN KEY (employee_key) REFERENCES employee(employee_key)

[local] stan@stan=> \d permitted_work
                          Table "public.permitted_work"
    Column     |           Type           | Collation | Nullable |      Default
---------------+--------------------------+-----------+----------+-------------------
 employee_key  | integer                  |           | not null |
 work_type_key | integer                  |           | not null |
 permit        | boolean                  |           | not null | false
 modtime       | timestamp with time zone |           | not null | CURRENT_TIMESTAMP
Indexes:
    "permit_constraint" UNIQUE CONSTRAINT, btree (employee_key, work_type_key)
Foreign-key constraints:
    "permitted_work_employee_key_fkey" FOREIGN KEY (employee_key) REFERENCES employee(employee_key) ON DELETE RESTRICT
    "permitted_work_work_type_key_fkey" FOREIGN KEY (work_type_key) REFERENCES work_type(work_type_key) ON DELETE RESTRICT

Then I delete the rows from the table.

[local] stan@stan=> delete from task_instance ;
DELETE 31

Then I try to restore from the dump file.

[local] stan@stan=> \i task_instance.dump
SET
SET
SET
SET
SET
 set_config 

 
(1 row)

SET
SET
SET
SET
psql:task_instance.dump:55: ERROR:  relation "permitted_work" does not exist
LINE 3: permitted_work
^
QUERY:  SELECT
permit  FROM
permitted_work
WHERE
NEW.employee_key = permitted_work.employee_key
AND
NEW.work_type_key = permitted_work.work_type_key
CONTEXT:  PL/pgSQL function public.check_permission() line 4 at SQL statement
COPY task_instance, line 1: "1  1   1   8   17.50   2019-02-01 
00:00:00-05  2019-02-08 00:00:00-05  2019-10-24 06:28:44.502699-04   t

Re: Composite type storage overhead

2019-10-24 Thread Laiszner Tamás
Actually, this is not such a unique idea: 
https://instagram-engineering.com/sharding-ids-at-instagram-1cf5a71e5a5c

Thanks for the suggestion to split the primary key into components. But even
going down this route, packing the components into one structure (a composite
type) would be beneficial, as the same scheme is used across multiple tables.
And we are back at the original problem.
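
For reference, a hedged sketch of that article's scheme (41 bits of
milliseconds since a custom epoch, 13 bits of shard id, 10 bits of per-shard
sequence) packed into a plain bigint; all names are illustrative, and the
epoch constant is the one quoted in the article:

CREATE OR REPLACE FUNCTION next_sharded_id(shard_id int, seq bigint)
RETURNS bigint LANGUAGE sql AS $$
    SELECT ((floor(extract(epoch FROM clock_timestamp()) * 1000)::bigint
             - 1314220021721)                 -- ms since the custom epoch
            << 23)                            -- top 41 bits
         | ((shard_id % 8192)::bigint << 10)  -- 13 bits of shard id
         | (seq % 1024)                       -- 10 bits of sequence
$$;

In the article the sequence argument comes from a per-shard sequence, e.g.
next_sharded_id(5, nextval('shard_5_seq')).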

From: Thomas Kellerer 
Sent: Thursday, 24 October 2019, 8:19
To: pgsql-general@lists.postgresql.org 
Subject: Re: Composite type storage overhead

> 3. The value is logically defined as a 128-bit integer, that is in
> itself a compound value split into a few "bit groups". Extracting
> these parts can be done by simple (and supposedly efficient) bitwise
> operators when stored as integer, but becomes much more cumbersome
> with UUID, I guess.

This is usually a bad idea.

Putting logic into the primary key value and merging different types of
information in a single column is typically not a good idea.
(And it violates first normal form to begin with.)

I would strongly recommend revisiting this idea and considering, e.g., a
multi-column primary key instead, where each of these "groups" is stored in
a separate column and the actual (business) value can be retrieved without
any bit-shifting or similar operations.

Thomas








Failed to increase the restart LSN of PostgreSQL logical replication slot

2019-10-24 Thread Rashmi V Bharadwaj
Hi,

When I tried to update the flush LSN position of the logical replication
slot for my 11.3 database, using the command

select pg_replication_slot_advance(, )

I get the error:

user=cdcpsqlsrc,db=db_dsn_test03,app=PostgreSQL JDBC Driver,client=172.24.42.236
DEBUG: failed to increase restart lsn: proposed 0/B05EE120, after 0/D0664DC8,
current candidate 0/A5180928, current after 0/A5180928, flushed up to 0/A5119080

Could you please explain the reason for this error and help resolve it?

Thanks,
Rashmi
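
That line appears to be emitted at DEBUG level by the logical decoding
machinery when the proposed restart LSN is behind what the slot already
holds; a slot cannot be moved backwards. A minimal sketch for inspecting the
slot's current positions before advancing ('my_slot' is a placeholder name):

SELECT slot_name, restart_lsn, confirmed_flush_lsn
FROM pg_replication_slots
WHERE slot_name = 'my_slot';   -- 'my_slot' is a placeholder

SELECT pg_replication_slot_advance('my_slot', pg_current_wal_lsn());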





Re: Composite type storage overhead

2019-10-24 Thread Thomas Kellerer
> 3. The value is logically defined as a 128-bit integer, that is in
> itself a compound value split into a few "bit groups". Extracting
> these parts can be done by simple (and supposedly efficient) bitwise
> operators when stored as integer, but becomes much more cumbersome
> with UUID, I guess.

This is usually a bad idea.

Putting logic into the primary key value and merging different types of
information in a single column is typically not a good idea.
(And it violates first normal form to begin with.)

I would strongly recommend revisiting this idea and considering, e.g., a
multi-column primary key instead, where each of these "groups" is stored in
a separate column and the actual (business) value can be retrieved without
any bit-shifting or similar operations.

Thomas
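
A hedged sketch of that suggestion, with illustrative names:

CREATE TABLE sharded_entity (
    created_at timestamptz NOT NULL DEFAULT now(),  -- the "time" group
    shard_id   smallint    NOT NULL,                -- the "shard" group
    seq        integer     NOT NULL,                -- the "sequence" group
    PRIMARY KEY (created_at, shard_id, seq)
);

Each bit group becomes its own column, queryable and indexable without any
bit-shifting.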