Re: How to remove user specific grant and revoke

2023-06-04 Thread Andrus

Hi!



From your first message I was under the impression that pgAdmin shows one
REVOKE ALL for every GRANT, i.e. all REVOKE commands at once.  If that is not
the case you may have found a bug in pgAdmin.  Please ask on the pgadmin-support
list or open a GitHub issue.

Speaking of which, I found https://github.com/pgadmin-org/pgadmin4/issues/5926
which looks like the behavior you're describing.  But this was already fixed in
7.2 and your original post says that you're using 7.2.  Please check if your
version is correct.


In pgAdmin 7.2, right-clicking on the table name and selecting Refresh does
nothing.


Smells like a bug.

Right-clicking on Tables and selecting Refresh worked.

Andrus.


Re: How to remove user specific grant and revoke

2023-06-03 Thread Andrus

Hi!


Should I run separate REVOKE commands for every user to remove those
revokes?


How to remove user-specific grants?


After running the REVOKE commands in psql, the GRANT commands disappeared
magically. It looks like pgAdmin does not allow executing REVOKE commands.


After running a script which grants user group table modification rights to
admin users:


CREATE POLICY kaspriv_sel_policy ON kaspriv FOR SELECT USING (true);
CREATE POLICY kaspriv_mod_policy ON kaspriv USING (
  lower(kasutaja) = current_user OR kasutaja IN
    ( select kasutaja from kasutaja where ','||firmad||','
      LIKE '%,'|| (select firmad from kasutaja where lower(kasutaja) = current_user) || ',%'
    )
);
ALTER TABLE kaspriv ENABLE ROW LEVEL SECURITY;
revoke all on kaspriv from public;
grant select on kaspriv to public;
grant insert, update, delete on kaspriv to admin1, admin2;

pgAdmin shows revoke commands for those users:

REVOKE ALL ON TABLE public.kaspriv FROM admin1;
REVOKE ALL ON TABLE public.kaspriv FROM admin2;

How to prevent pgAdmin from showing those revokes?

Andrus.



Re: How to remove user specific grant and revoke

2023-06-03 Thread Andrus

Hi!



After this command pgAdmin still shows revoke and grant commands for
alekspoluh role.
How to remove all grant and revoke assignments for a role?

Please confirm that \dp public.kaspriv no longer shows an ACL for alekspoluh
after running:

REVOKE ALL ON public.kaspriv FROM alekspoluh;

psql (12.2 (Debian 12.2-2.pgdg100+1))
Type "help" for help.

sba=# REVOKE ALL ON public.kaspriv FROM alekspoluh;
REVOKE
sba=# \o result.txt
sba=#  \dp public.kaspriv

sba=# \q

#grep alekspoluh result.txt

Returns nothing. So output does not contain this role.


There must be something wrong with pgAdmin if it still shows REVOKE ALL for
that role after its ACL is gone.  Looking at the code, pgAdmin emits REVOKE ALL
for any grantee it finds in the ACL.

https://github.com/pgadmin-org/pgadmin4/blob/REL-7_2/web/pgadmin/browser/server_groups/servers/databases/schemas/tables/utils.py#L712


I re-opened pgAdmin. The alekspoluh role is no longer displayed in the kaspriv
table's SQL window.


pgAdmin shows only a single revoke command at a time. Now it shows

REVOKE ALL ON TABLE public.kaspriv FROM yllelohmus;

I ran

REVOKE ALL ON TABLE public.kaspriv FROM yllelohmus;

After that pgAdmin shows the next single revoke command:

REVOKE ALL ON TABLE public.kaspriv FROM villuuus;

It looks like pgAdmin shows only one REVOKE command at a time, but actually
there are more revokes.


Should I run separate REVOKE commands for every user to remove those
revokes?


pgAdmin also shows about 100 grant commands for users like

GRANT SELECT ON TABLE public.kaspriv TO paide;

How to remove user-specific grants?

Andrus.


Re: How to remove user specific grant and revoke

2023-06-03 Thread Andrus

Hi!

I tried

alter role alekspoluh reset all;

After this command pgAdmin still shows revoke and grant commands for the
alekspoluh role.


How to remove all grant and revoke assignments for a role?


Andrus.

03.06.2023 20:50 Andrus wrote:


Hi!

> REVOKE must be executed by the grantor (sba_owner) or a superuser, in case
you have not already tried that.

REVOKE was executed by a superuser, postgres. pgAdmin uses this user to
log on.

Do you get any errors?


There are no errors.

Andrus.



Re: How to remove user specific grant and revoke

2023-06-03 Thread Andrus

Hi!



Or just execute those REVOKE ALL commands (except for PUBLIC) that pgAdmin
already gives you.


pgAdmin gives

REVOKE ALL ON TABLE public.kaspriv FROM someuser;

I ran it but pgAdmin still gives this statement.

Andrus.



How to remove user specific grant and revoke

2023-06-03 Thread Andrus

User groups table is defined as

    CREATE TABLE IF NOT EXISTS public.kaspriv
    (
    id serial primary key,
    user character(10) NOT NULL,
    group character(35) NOT NULL
    ...
    )

There are hundreds of users. In earlier times, GRANT and REVOKE commands were
executed for every user separately. Later, REVOKE and GRANT commands for
PUBLIC were added:


    REVOKE ALL ON TABLE public.kaspriv FROM PUBLIC;
    GRANT SELECT ON TABLE public.kaspriv TO PUBLIC;

The pgAdmin SQL tab still shows revoke and grant commands for every user as well:

    REVOKE ALL ON TABLE public.kaspriv FROM PUBLIC;
    REVOKE ALL ON TABLE public.kaspriv FROM someuser;
    REVOKE ALL ON TABLE public.kaspriv FROM someotheruser;
    ...
    GRANT SELECT ON TABLE public.kaspriv TO PUBLIC;
    GRANT SELECT ON TABLE public.kaspriv TO someuser;
    GRANT SELECT ON TABLE public.kaspriv TO someother;
    ...

How to remove those unnecessary user-specific GRANT and REVOKE commands
to make the rights cleaner? pgAdmin does not have a delete option for those.


Something like

    DROP REVOKE ALL ON TABLE public.kaspriv FROM all EXCEPT public;
    DROP GRANT SELECT ON TABLE public.kaspriv FROM all EXCEPT public;

This will be a one-time action. It can be done manually in pgAdmin or
using a script run once.
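
A minimal sketch of such a one-time script, assuming superuser access and the single table above: expand the table's ACL from pg_class with aclexplode() and generate one REVOKE per grantee other than PUBLIC and the table owner, then review and execute the output (e.g. with \gexec in psql):

```sql
-- Sketch: generate REVOKE statements for every user-specific grantee.
-- aclexplode() expands the relacl aclitem[]; grantee oid 0 means PUBLIC.
select distinct
       format('REVOKE ALL ON TABLE public.kaspriv FROM %I;',
              pg_get_userbyid(acl.grantee))
from pg_class c,
     lateral aclexplode(c.relacl) as acl
where c.oid = 'public.kaspriv'::regclass
  and acl.grantee <> 0            -- keep the PUBLIC grant/revoke pair
  and acl.grantee <> c.relowner;  -- keep the owner's privileges
```

Running the generated statements removes the per-user ACL entries, so only the PUBLIC grants remain.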


Using

    PostgreSQL 12.2 (Debian 12.2-2.pgdg100+1) on x86_64-pc-linux-gnu,
    compiled by gcc (Debian 8.3.0-6) 8.3.0, 64-bit

and latest pgAdmin 7.2

Posted also in 
https://stackoverflow.com/questions/76394896/how-to-remove-unnecessary-grant-and-revoke-privileges-from-table


Andrus.


Re: How to speed up product code and subcode match

2023-05-23 Thread Andrus

Hi!

Using index

create index on toode ( split_part( toode, '/', 1) )

and query

select toode.toode, n2, n3, n4
from toode, vordlusajuhinnak
where split_part( toode.toode, '/', 1) = vordlusajuhinnak.toode;

reduces run time to 5 minutes.
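
For reference, the planner only considers an expression index when the query uses exactly the indexed expression; a quick check (a sketch, using the tables from the thread):

```sql
-- Sketch: verify the expression index is usable for the join.
EXPLAIN
select toode.toode, n2, n3, n4
from toode, vordlusajuhinnak
where split_part(toode.toode, '/', 1) = vordlusajuhinnak.toode;
-- If the split_part index does not appear in the plan, check that the
-- expression (argument order and the '/' separator) matches the index
-- definition exactly, and re-run ANALYZE on both tables.
```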

Andrus.


23.05.2023 17:26 Andrus kirjutas:


Hi!


Separate
the product code and size into two columns --- if there's somebody
who really wants to see them in the above format, give them a
view or generated column.  Then instead of the impossible-to-optimize
queries you showed, you could do something like

select toode.toode , n2, n3, n4
from toode, vordlusajuhinnak
where toode.toode = vordlusajuhinnak.toode;


Can the functional index

create index on toode ( split_part( toode, '/', 1) )

and query

select toode.toode, n2, n3, n4
from toode, vordlusajuhinnak
where split_part( toode.toode, '/', 1) = vordlusajuhinnak.toode;

be used while keeping the existing table structure? Should a functional index
produce the same speed improvement as using a separate column?


Andrus.



Re: How to speed up product code and subcode match

2023-05-23 Thread Andrus

Hi!


Separate
the product code and size into two columns --- if there's somebody
who really wants to see them in the above format, give them a
view or generated column.  Then instead of the impossible-to-optimize
queries you showed, you could do something like

select toode.toode , n2, n3, n4
from toode, vordlusajuhinnak
where toode.toode = vordlusajuhinnak.toode;


Can the functional index

create index on toode ( split_part( toode, '/', 1) )

and query

select toode.toode, n2, n3, n4
from toode, vordlusajuhinnak
where split_part( toode.toode, '/', 1) = vordlusajuhinnak.toode;

be used while keeping the existing table structure? Should a functional index
produce the same speed improvement as using a separate column?


Andrus.



Re: How to speed up product code and subcode match

2023-05-23 Thread Andrus

Hi!

I ran

analyze toode;

create index vordlusajuhinnak_toode_pattern_idx on 
vordlusajuhinnak(toode bpchar_pattern_ops);


create index vordlusajuhinnak_toode_idx on vordlusajuhinnak(toode);
analyze vordlusajuhinnak;

Select runs now more than one hour. Output from explain

explain create table peatoode as
select toode.toode , n2, n3, n4
from toode, vordlusajuhinnak
WHERE toode.toode=vordlusajuhinnak.toode OR  toode.toode LIKE 
vordlusajuhinnak.toode||'/%'


Gather  (cost=1000.55..443361894.55 rows=143828910 width=78)
  Workers Planned: 2
  ->  Nested Loop  (cost=0.55..428978003.55 rows=59928712 width=78)
        Join Filter: ((toode.toode = (vordlusajuhinnak.toode)::bpchar) OR (toode.toode ~~ ((vordlusajuhinnak.toode)::text || '/%'::text)))
        ->  Parallel Index Only Scan using toode_pkey on toode  (cost=0.55..95017.93 rows=303869 width=60)
        ->  Seq Scan on vordlusajuhinnak  (cost=0.00..721.33 rows=39433 width=32)


with

Set enable_nestloop to off;

explain output is:

Gather  (cost=1001000.55..10443361906.55 rows=143828910 width=78)
  Workers Planned: 2
  ->  Nested Loop  (cost=100.55..10428978015.55 rows=59928712 width=78)
        Join Filter: ((toode.toode = (vordlusajuhinnak.toode)::bpchar) OR (toode.toode ~~ ((vordlusajuhinnak.toode)::text || '/%'::text)))
        ->  Parallel Index Only Scan using toode_pkey on toode  (cost=0.55..95029.93 rows=303869 width=60)
        ->  Seq Scan on vordlusajuhinnak  (cost=0.00..721.33 rows=39433 width=32)


How to speed it up?

Andrus.

23.05.2023 14:32 Bzm@g kirjutas:

Great,

However, I think it is still way too slow.
The next step is to run ANALYZE also for the other table, vordlusajuhinnak.

And make sure you have an index on vordlusajuhinnak.toode similar to 
the index on toode.toode


--
Boris



On 23.05.2023 at 12:56, Andrus wrote:



Hi!

I ran analyze firma2.toode and changed the WHERE clause to use LIKE:

create table peatoode as
select toode.toode , n2, n3, n4
from toode, vordlusajuhinnak
WHERE toode.toode=vordlusajuhinnak.toode OR  toode.toode LIKE 
vordlusajuhinnak.toode||'/%'


In this case it took 37 minutes and returned 277966 rows.

Thank you for help.

Andrus.

23.05.2023 11:24 Bzm@g wrote:

Also, your row count is way off, I guess. Did you ever run ANALYZE on the big table?

--
Boris



On 23.05.2023 at 10:22, ...@2bz.de wrote:

Hi there,

I guess the main problem is the nested loop.

As a quick recheck, what happens if you run your query without the nested loop?

This is not a solution but a quick test.

In a session:

Set enable_nestloop = off;
Explain <your query>;
--
Boris



On 23.05.2023 at 08:53, Andrus wrote:

Hi!

Price list of main products vordlusajuhinnak contains 3 prices for product 
(column toode) and has 39433 products:

create table vordlusajuhinnak( toode varchar(60), n2 numeric(8,2), n3 
numeric(8,2), n4 numeric(8,2) );

The toode column is unique (it could be the primary key) and contains upper-case
letters, digits and - characters.

product table (toode) contains 733021 products:

CREATE TABLE toode (
grupp character(1),
toode character(60) primary key,
... lot of other columns
  );

Both tables have pattern indexes to speed up queries:

CREATE INDEX toode_toode_pattern_idx
ON toode (toode bpchar_pattern_ops ASC NULLS LAST);
-- This index is probably not used and should be removed:
CREATE INDEX vordlusajuhinnak_toode_pattern_idx ON vordlusajuhinnak(toode 
bpchar_pattern_ops);

The product table has both main products and subproducts with sizes. The size
is always separated from the main product code by a / character:


SHOE1-BLACK
SHOE1-BLACK/38
SHOE1-BLACK/41
SHOE1-BLACK/42
SHOE1-BLACK/43
SHOE2/XXL
SHOE2/L

Product codes in this table also contain upper-case characters only.

Trying to get prices for all products using

create table peatoode as
select toode.toode , n2, n3, n4
from toode, vordlusajuhinnak
where  toode.toode between vordlusajuhinnak.toode and 
vordlusajuhinnak.toode||'/z'

Takes 4.65 hours. How to speed this query up?
Output from explain:


Nested Loop  (cost=0.55..272273178.12 rows=3202240012 width=78)
  ->  Seq Scan on vordlusajuhinnak  (cost=0.00..721.33 rows=39433 width=32)
  ->  Index Only Scan using toode_pkey on toode  (cost=0.55..6092.62 rows=81207 width=60)
        Index Cond: (toode >= (vordlusajuhinnak.toode)::bpchar)
        Filter: ((toode)::text <= ((vordlusajuhinnak.toode)::text || '/z'::text))

Using PostgreSQL 13.2, compiled by Visual C++ build 1900, 64-bit on a Windows
server, and the psqlODBC driver.
Upgrading Postgres is possible, if this helps.

Tried also using like:

WHERE toode.toode=vordlusajuhinnak.toode OR
  toode.toode LIKE vordlusajuhinnak.toode||'/%'

Posted also in

https://stackoverflow.com/questions/76311957/how-to-match-character-columns-with-separated-subcolumns

Andrus.




How to speed up product code and subcode match

2023-05-23 Thread Andrus

Hi!

Price list of main products vordlusajuhinnak contains 3 prices for 
product (column toode) and has 39433 products:


    create table vordlusajuhinnak( toode varchar(60), n2 numeric(8,2), 
n3 numeric(8,2), n4 numeric(8,2) );


The toode column is unique (it could be the primary key) and contains upper-case
letters, digits and - characters.


product table (toode) contains 733021 products:

    CREATE TABLE toode (
    grupp character(1),
    toode character(60) primary key,
    ... lot of other columns
  );

Both tables have pattern indexes to speed up queries:

    CREATE INDEX toode_toode_pattern_idx
    ON toode (toode bpchar_pattern_ops ASC NULLS LAST);
    -- This index is probably not used and should be removed:
    CREATE INDEX vordlusajuhinnak_toode_pattern_idx ON 
vordlusajuhinnak(toode bpchar_pattern_ops);


The product table has both main products and subproducts with sizes. The size
is always separated from the main product code by a / character:



    SHOE1-BLACK
    SHOE1-BLACK/38
    SHOE1-BLACK/41
    SHOE1-BLACK/42
    SHOE1-BLACK/43
    SHOE2/XXL
    SHOE2/L

Product codes in this table also contain upper-case characters only.

Trying to get prices for all products using

    create table peatoode as
    select toode.toode , n2, n3, n4
    from toode, vordlusajuhinnak
    where  toode.toode between vordlusajuhinnak.toode and 
vordlusajuhinnak.toode||'/z'


Takes 4.65 hours. How to speed this query up?
Output from explain:

    Nested Loop  (cost=0.55..272273178.12 rows=3202240012 width=78)
      ->  Seq Scan on vordlusajuhinnak  (cost=0.00..721.33 rows=39433 width=32)
      ->  Index Only Scan using toode_pkey on toode  (cost=0.55..6092.62 rows=81207 width=60)
            Index Cond: (toode >= (vordlusajuhinnak.toode)::bpchar)
            Filter: ((toode)::text <= ((vordlusajuhinnak.toode)::text || '/z'::text))


Using PostgreSQL 13.2, compiled by Visual C++ build 1900, 64-bit on a
Windows server, and the psqlODBC driver.

Upgrading Postgres is possible, if this helps.

Tried also using like:

    WHERE toode.toode=vordlusajuhinnak.toode OR
  toode.toode LIKE vordlusajuhinnak.toode||'/%'

Posted also in

https://stackoverflow.com/questions/76311957/how-to-match-character-columns-with-separated-subcolumns

Andrus.





How to pass table column values to function

2023-02-11 Thread Andrus

Hi!

Table source contains an integer column. Its values should be passed to a
function for selecting data from another table.


I tried

    CREATE OR REPLACE FUNCTION public.TestAddAssetTransactions(dokumnrs int[])
    RETURNS int AS
    $BODY$
    with i1 as (
        INSERT INTO bilkaib (dokumnr)
        select dokumnr from dok where dokumnr in (select * from unnest(dokumnrs))
        returning *
    )
    select count(*) from i1;
    $BODY$ language sql;

    create temp table bilkaib (dokumnr int) on commit drop;
    create temp table dok (dokumnr serial primary key) on commit drop;
    create temp table source (dokumnr int) on commit drop;
    insert into source values (1),(2);

    select TestAddAssetTransactions( (select ARRAY[dokumnr] from source)::int[] );



but got error

> ERROR: more than one row returned by a subquery used as an expression

How to pass a set of integers to a function? Should a temp table with a fixed
name be used, or is there a better solution?
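
The error comes from the call site: (select ARRAY[dokumnr] from source) builds one single-element array per row, so with two rows the subquery returns two rows. A sketch of the fix, aggregating all values into one array with array_agg:

```sql
-- Sketch: aggregate all source rows into a single int[] argument.
select TestAddAssetTransactions( (select array_agg(dokumnr) from source) );
```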


Using Postgresql 12+

Andrus.


Re: How to create directory format backup

2023-02-08 Thread Andrus

Hi!



How to create a backup in a format from which tables can be selectively restored?
Dump as custom-format archive (-F custom) and use that with pg_restore and
options --table or --list/--use-list to select what should be restored.


How to select tables interactively, like pgAdmin allows when the directory
format is used?


The database contains hundreds of schemas. I need to restore public and one
other schema.


The whole backup file is scanned to restore only two schemas. It takes a lot
of time.


Also, the directory format allows using all cores with the --jobs=32
parameter. Dump and partial restore using the custom format are much slower.
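
A sketch of the selective restore workflow with the directory format (hypothetical database and file names), based on the options mentioned above:

```shell
# Sketch (hypothetical names): parallel directory-format dump, then a
# selective, parallel restore of just two schemas.
pg_dump -F directory -f sba_dir --jobs=8 sba

# Restore only the wanted schemas; -n may be repeated.
pg_restore -d sba_restored --jobs=8 -n public -n other_schema sba_dir

# Or: list the archive contents, edit the list, and restore from it.
pg_restore -l sba_dir > sba.list
# ... comment out unwanted entries in sba.list with a leading ';' ...
pg_restore -d sba_restored --jobs=8 -L sba.list sba_dir
```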


Andrus.



--
Erik

Re: How to create directory format backup

2023-02-08 Thread Andrus

Hi!

>Looks like your filesystem on the client is having limits on file sizes.
Use a better filesystem, or just dump on Linux; its filesystems usually
don't hit these limits.


The file size is only 6.2 GB. If the custom format is used, pg_dump creates a
large file without problems. There are no file size limits; the error
message is not about this.


The backup client is running on Windows 11 and this cannot be changed.

How to create a backup in a format from which tables can be selectively restored?

Posted also in 
https://stackoverflow.com/questions/75387616/how-to-make-directory-format-backup


Andrus.



How to create directory format backup

2023-02-08 Thread Andrus

Hi!

Creating backup in directory format using

    pg_dump -f "sba" -Fdirectory --jobs=32 --verbose sba

throws error

    pg_dump: error: could not stat file "sba/282168.data.gz": value too large


How to fix it?

Server is Postgres 12 running in Debian Linux 10 under WSL
Client is pg_dump from Postgres 15 running in Windows 11

Andrus


WAL contains references to invalid pages in hot standby

2022-11-26 Thread Andrus

Hi!

WAL playback on the hot standby server crashes with a "WAL contains
references to invalid pages" error:


2022-11-26 17:48:12.889 EET [497] LOG:  restored log file "000105790064" from archive
2022-11-26 17:48:20.897 EET [497] LOG:  restored log file "000105790065" from archive
2022-11-26 17:48:26.564 EET [497] LOG:  restored log file "000105790066" from archive
2022-11-26 17:48:32.019 EET [497] LOG:  consistent recovery state reached at 579/66954858
2022-11-26 17:48:32.019 EET [495] LOG:  database system is ready to accept read only connections
2022-11-26 17:48:32.019 EET [497] WARNING:  page 11 of relation base/54455050/83221012 is uninitialized
2022-11-26 17:48:32.019 EET [497] CONTEXT:  WAL redo at 579/66967DB0 for Heap2/VISIBLE: cutoff xid 167913422 flags 0x01
2022-11-26 17:48:32.019 EET [497] PANIC:  WAL contains references to invalid pages
2022-11-26 17:48:32.019 EET [497] CONTEXT:  WAL redo at 579/66967DB0 for Heap2/VISIBLE: cutoff xid 167913422 flags 0x01
2022-11-26 17:48:34.315 EET [495] LOG:  startup process (PID 497) was terminated by signal 6: Aborted
2022-11-26 17:48:34.315 EET [495] LOG:  terminating any other active server processes

2022-11-26 17:48:34.330 EET [495] LOG:  database system is shut down

It looks like file 000105790067 is corrupted. I looked into
this file and it contains data like the other WAL files.


How to fix this error so that WAL playback can continue?

Master server is in Postgres 12 in Debian 10.3

Hot standby in Postgres 12 in WSL Debian 11.5

Can the Debian version difference (10.3 vs 11.5) cause this exception?

Andrus.


How to create hot standby of Postgres 12/Debian 10.3 in Windows 11

2022-11-25 Thread Andrus

Hi!

Postgres 12 from Postgres repository is running in Debian 10.3 VPS server:

PostgreSQL 12.2 (Debian 12.2-2.pgdg100+1) on x86_64-pc-linux-gnu, 
compiled by gcc (Debian 8.3.0-6) 8.3.0, 64-bit


(its upgrade is currently not an option).

The cluster should be mirrored to a Windows 11 workstation in real time over
the internet.


I installed Debian on Windows 11 using the Windows Subsystem for Linux with

wsl --install Debian

This installs Debian 11.5

I tried to install Postgres 12 for replication in this Debian using

sudo apt install postgresql-12

but this fails with

E: Unable to locate package postgresql-12

How to install Postgres 12 on Debian 11.5?
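
Debian 11's own archive only carries postgresql-13, which is why postgresql-12 cannot be located. A sketch of adding the PGDG apt repository, which (at the time of writing) still provides PostgreSQL 12 packages for bullseye; the paths and key URL are the standard PGDG ones:

```shell
# Sketch: add the PostgreSQL Global Development Group (PGDG) repository,
# then install postgresql-12 from it.
sudo apt-get install -y curl ca-certificates gnupg
curl -fsSL https://www.postgresql.org/media/keys/ACCC4CF8.asc \
  | sudo gpg --dearmor -o /usr/share/keyrings/pgdg.gpg
echo "deb [signed-by=/usr/share/keyrings/pgdg.gpg] http://apt.postgresql.org/pub/repos/apt bullseye-pgdg main" \
  | sudo tee /etc/apt/sources.list.d/pgdg.list
sudo apt-get update
sudo apt-get install -y postgresql-12
```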

Will this replication work, since the Debian versions are different but
hopefully the locale implementations are the same?


Which is the most reasonable way to replicate the whole Postgres 12/Debian
10.3 cluster on Windows 11?


Andrus.


Re: How to return argument data type from sql function

2022-10-14 Thread Andrus

> Adrian Klaver recommends in



https://stackoverflow.com/questions/74061290/how-to-return-argument-datatype-from-sql-function#comment130780708_74061290

and this worked. Is this the best solution?

Padding a text typed output with actual significant spaces "works"? It 
is not equivalent to a bpchar with insignificant padding spaces...


You are right. I need the char(n) type and this is not working. How to use an
expression in a cast, like


select torus(charcol) :: CHAR( ColWidth('public', 'test', 'charcol')  ) 
from test


This throws an error in Postgres. ColWidth is immutable and called with
constant arguments, so it should work. How to fix Postgres to allow a
constant ColWidth() expression in a cast?


Andrus.


Re: How to return argument data type from sql function

2022-10-14 Thread Andrus

Hi!

>Yeah, you could do that if you have the column information at hand.

Personally I'd also throw in "... and atttypid = 'bpchar'::regtype",
because that atttypmod calculation will give you garbage for types
other than bpchar and varchar.


I added this:

create or replace function public.ColWidth(p_namespace text, p_table text, p_field text)
returns int as $f$
  select atttypmod-4 from pg_namespace n, pg_class c, pg_attribute a
  where n.nspname = p_namespace and
        c.relnamespace = n.oid and
        c.relname = p_table and
        a.attrelid = c.oid and
        atttypid = 'bpchar'::regtype and
        a.attname = p_field;
$f$ LANGUAGE SQL ;

Tables with same name are in different schemas.

How to change this query so that it searches schemas in search_path order and
returns the column width from the first match? In this case the p_namespace
parameter can be removed.


Or should it be replaced with a dynamic query like

execute 'select ' || p_field || ' from ' || p_table || ' limit 0'

and the column size obtained from this query result somehow?
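
A minimal sketch of a search_path-aware variant, keeping the ColWidth name from above: casting the bare table name with to_regclass() resolves it using the current search_path, so the schema parameter can be dropped.

```sql
-- Sketch: resolve the table via search_path instead of an explicit schema.
-- to_regclass() returns NULL (rather than erroring) if no table is found.
create or replace function public.ColWidth(p_table text, p_field text)
returns int as $f$
  select a.atttypmod - 4
  from pg_attribute a
  where a.attrelid = to_regclass(p_table)
    and a.atttypid = 'bpchar'::regtype
    and a.attname = p_field;
$f$ language sql;

-- Usage: select ColWidth('test', 'charcol');
-- finds the first table named test on the search_path.
```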

Andrus.


Re: How to return argument data type from sql function

2022-10-14 Thread Andrus

Hi!

Making separate functions for text and bpchar works for me.

regression=# select pg_typeof(torus(f1)) from char_tbl;
  pg_typeof
---
  character


I tried

create or replace FUNCTION torus(eevarus bpchar) returns bpchar immutable AS $f$
  select translate( $1, U&'\00f8\00e9', U&'\0451\0439' );
$f$ LANGUAGE SQL ;

create temp table test ( charcol char(10) );
insert into test values ('test');
select torus(charcol) FROM test;

but it still returns the result without trailing spaces. So it is not working.


Another possibility is to have just one function declared
to take and return anyelement.  You'd get failures at
execution if the actual argument type isn't coercible
to and from text (since translate() deals in text) but
that might be fine.


I tried

create or replace FUNCTION torus(eevarus anylement) returns anylement immutable AS $f$
  select translate( $1, U&'\00f8\00e9', U&'\0451\0439' );
$f$ LANGUAGE SQL ;

but got error

type anylement does not exist.
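
Note that the type name in the attempt above is misspelled ("anylement"), which is what triggers the error. With the correct spelling, the declaration from the quoted suggestion is accepted; a sketch (whether a given argument type can be coerced to and from text is then checked at execution time, as the quote warns):

```sql
-- Sketch: polymorphic version; the declared return type follows the
-- actual argument type at each call site.
create or replace FUNCTION torus(eevarus anyelement) returns anyelement
immutable AS $f$
  select translate( $1, U&'\00f8\00e9', U&'\0451\0439' );
$f$ LANGUAGE SQL ;
```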

Finally I tried

create or replace FUNCTION torus(eevarus text) returns text immutable AS $f$
  select translate( $1, U&'\00f8\00e9', U&'\0451\0439' );
$f$ LANGUAGE SQL ;

create or replace function public.ColWidth(p_namespace text, p_table text, p_field text)
returns int as $f$
  select atttypmod-4 from pg_namespace n, pg_class c, pg_attribute a
  where n.nspname = p_namespace and
        c.relnamespace = n.oid and
        c.relname = p_table and
        a.attrelid = c.oid and
        a.attname = p_field;
$f$ LANGUAGE SQL ;

create table public.test ( charcol char(10) );
insert into test values ('test');
select rpad ( torus(charcol),  colwidth('public', 'test', 'charcol') )
FROM Test

as Adrian Klaver recommends in

https://stackoverflow.com/questions/74061290/how-to-return-argument-datatype-from-sql-function#comment130780708_74061290

and this worked. Is this the best solution?

How to remove the p_namespace parameter from ColWidth()? ColWidth() should
return the column width from the first table on the search_path, just like
select ... from test finds table test.


Andrus.


How to return argument data type from sql function

2022-10-14 Thread Andrus

PostgreSQL 12.2+ function is defined as

    create FUNCTION torus(eevarus text) returns text immutable AS $f$
 select translate( $1, U&'\00f8\00e9', U&'\0451\0439' );
    $f$ LANGUAGE SQL ;

This function is called on CHAR(n) or text columns like

    create temp table test (
    charcol char(10),
    textcol text );

    insert into test values ('test', 'test');

    select torus(charcol), torus(textcol), charcol

torus(charcol) returns a text column and loses the original column width. How
to force torus() to return the argument type:

if a char(n) column is passed as an argument, torus() should also return the
char(n) data type.

I tried to use bpchar instead of text

    create or replace FUNCTION torusbpchar(eevarus bpchar) returns bpchar immutable AS $f$
      select translate( $1, U&'\00f8\00e9', U&'\0451\0439' );
    $f$ LANGUAGE SQL ;

torusbpchar(charcol) still returns the text data type.

npgsql DataReader is used to get data.

Andrus.


Determine if range list contains specified integer

2022-05-27 Thread Andrus

Hi!


The product type table contains product types. Some ids may be missing:

    create table artliik (liiginrlki char(3) primary key);
    insert into artliik values('1');
    insert into artliik values('3');
    insert into artliik values('4');
    ...
    insert into artliik values('999');

The property table contains a comma-separated list of types.

    create table strings ( id char(100) primary key, kirjeldLku char(200) );
    insert into strings values ('item1', '1,4-5' );
    insert into strings values ('item2', '1,2,3,6-9,23-44,45' );

A type can be specified as a single integer, e.g. 1, 2, 3, or as a range like
6-9 or 23-44.

The list can contain both of them.


How to get all properties for a given type?
Query

    select id
    from artliik
    join strings on ','||trim(strings.kirjeldLku)||',' like 
'%,'||trim(artliik.liiginrlki)||',%'


returns data for single-integer lists only.
How to change the join so that type ranges in the list, like 6-9, are also matched?
E.g. if the list contains 6-9, types 6, 7, 8 and 9 should be included in the report.

Postgres 13 is used.

Posted also in

https://stackoverflow.com/questions/72404218/how-determine-if-range-list-contains-specified-integer
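
A minimal sketch of one approach, assuming the tables above: split each comma-separated list into elements, turn each element into an int4range (a single integer N becomes the range [N,N]), and test containment.

```sql
-- Sketch: expand each comma-separated element of kirjeldLku and check
-- whether it covers the type id. An element is either a single integer
-- ('7') or a dash range ('6-9').
select s.id
from strings s
join artliik a on exists (
  select 1
  from regexp_split_to_table(trim(s.kirjeldLku), ',') as elem
  where int4range(
          split_part(trim(elem), '-', 1)::int,
          coalesce(nullif(split_part(trim(elem), '-', 2), ''),
                   split_part(trim(elem), '-', 1))::int,
          '[]'                              -- inclusive on both ends
        ) @> trim(a.liiginrlki)::int
);
```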

Andrus.


How to get updated order data

2022-04-08 Thread Andrus

Hi!

Orders are in table

    create table order (
      dokumnr int primary key,
      packno char(10)
    );
    insert into order (dokumnr) values (123);

One user sets pack number using

    update order set packno='Pack1' where dokumnr=123

3 seconds later other user retrieves pack number using

    select packno from order where dokumnr=123

However, the other user gets a null value, not Pack1 as expected. Some time
later, the correct value Pack1 is returned.


How to get updated data from the other user immediately?
3 seconds is a long time; it is expected that the select should retrieve the
updated data.


There are a lot of transactions running concurrently. Maybe the update
command is not yet written to the database when the second user retrieves it.


How to flush the orders table so that current results are returned for the
second user's select?


Using

PostgreSQL 13.2, compiled by Visual C++ build 1914, 64-bit

and psqlODBC driver.

Andrus.


Re: How to split normal and overtime hours

2022-02-14 Thread Andrus

Hi!

It worked.

Thank you very much.

Andrus.

13.02.2022 16:46 Torsten Förtsch wrote:

WITH x AS (
   SELECT *
        , sum(hours) OVER w AS s
     FROM hours
   WINDOW w AS (PARTITION BY person ORDER BY job_id)
)
SELECT *
     , greatest(least(s, 120) - coalesce(lag(s, 1) OVER w, 0), 0) AS regular
     , hours - greatest(least(s, 120) - coalesce(lag(s, 1) OVER w, 0), 0) AS overtime
  FROM x
WINDOW w AS (PARTITION BY person ORDER BY job_id)


On Sun, Feb 13, 2022 at 1:57 PM Andrus  wrote:

Hi!

Thank you. In this result, regular and overtime columns contain
running totals.

How to fix this so that those columns contain just hours for each job?

sum on regular column should not be greater than 120 per person.

sum of regular and overtime columns must be the same as the sum of the
hours column in the hours table for every person.

Andrus.

13.02.2022 14:46 Torsten Förtsch wrote:

something like

SELECT *
     , least(sum(hours) OVER w, 120) AS regular
     , greatest(sum(hours) OVER w - 120, 0) AS overtime
  FROM hours
WINDOW w AS (PARTITION BY person ORDER BY job_id);

 job_id | person | hours | regular | overtime
++---+-+--
      2 | bill   |    10 |  10 |    0
      5 | bill   |    40 |  50 |    0
      8 | bill   |    10 |  60 |    0
     10 | bill   |    70 | 120 |   10
     11 | bill   |    30 | 120 |   40
     13 | bill   |    40 | 120 |   80
     15 | bill   |    10 | 120 |   90
      4 | hugo   |    70 |  70 |    0
      7 | hugo   |   130 | 120 |   80
      1 | john   |    10 |  10 |    0
      3 | john   |    50 |  60 |    0
      6 | john   |    30 |  90 |    0
      9 | john   |    50 | 120 |   20
     12 | john   |    30 | 120 |   50
     14 | john   |    50 | 120 |  100


On Sun, Feb 13, 2022 at 12:47 PM Andrus  wrote:

Hi!

Hours table contains working hours for jobs:

    create table hours (
    jobid integer primary key, -- job done, unique for person
    personid char(10) not null, -- person who did job
    hours numeric(5,2) not null -- hours worked for job
    )

Hours more than 120 are overtime hours.

How to split regular and overtime hours into different
columns using running total by job id and partition by person id?

For example, if John did 3 jobs 1, 2,3 with 90, 50 and 40
hours (total 180 hours) for each job correspondingly, result
table should be:

    personid    jobid  normal_hours overtime_hours
    john    1 90   0
    john    2 30  20
    john    3  0  40

sum on normal_hours column should not be greater than 120 per
person.

sum of normal_hours and overtime_hours columns must be the same
as the sum of the hours column in the hours table for every person.

Note that since hours running total becomes greater than 120
in job 2, job 2 hours should appear in both hours columns.

Maybe window functions can be used.

Andrus.


Re: How to split normal and overtime hours

2022-02-14 Thread Andrus

Hi!

Thank you. In this result, regular and overtime columns contain running 
totals.


How to fix this so that those columns contain just hours for each job?

sum on regular column should not be greater than 120 per person.

sum of regular and overtime columns must be the same as the sum of the hours
column in the hours table for every person.


Andrus.

13.02.2022 14:46 Torsten Förtsch wrote:

something like

SELECT *
     , least(sum(hours) OVER w, 120) AS regular
     , greatest(sum(hours) OVER w - 120, 0) AS overtime
  FROM hours
WINDOW w AS (PARTITION BY person ORDER BY job_id);

 job_id | person | hours | regular | overtime
--------+--------+-------+---------+----------
      2 | bill   |    10 |      10 |        0
      5 | bill   |    40 |      50 |        0
      8 | bill   |    10 |      60 |        0
     10 | bill   |    70 |     120 |       10
     11 | bill   |    30 |     120 |       40
     13 | bill   |    40 |     120 |       80
     15 | bill   |    10 |     120 |       90
      4 | hugo   |    70 |      70 |        0
      7 | hugo   |   130 |     120 |       80
      1 | john   |    10 |      10 |        0
      3 | john   |    50 |      60 |        0
      6 | john   |    30 |      90 |        0
      9 | john   |    50 |     120 |       20
     12 | john   |    30 |     120 |       50
     14 | john   |    50 |     120 |      100


On Sun, Feb 13, 2022 at 12:47 PM Andrus  wrote:

Hi!

Hours table contains working hours for jobs:

    create table hours (
    jobid integer primary key, -- job done, unique for person
    personid char(10) not null, -- person who did job
    hours numeric(5,2) not null -- hours worked for job
    )

Hours more than 120 are overtime hours.

How to split regular and overtime hours into different columns
using running total by job id and partition by person id?

For example, if John did 3 jobs 1, 2 and 3 with 90, 50 and 40 hours
(180 hours in total) respectively, the result table
should be:

    personid    jobid    normal_hours    overtime_hours
    john            1              90                 0
    john            2              30                20
    john            3               0                40

The sum of the normal_hours column should not be greater than 120 per person.

The sum of the normal_hours and overtime_hours columns must be the same as the
sum of the hours column in the hours table for every person.

Note that since the hours running total becomes greater than 120 in
job 2, job 2's hours should appear in both hours columns.

Maybe window functions can be used.

Andrus.


How to split normal and overtime hours

2022-02-13 Thread Andrus

Hi!

Hours table contains working hours for jobs:

    create table hours (
    jobid integer primary key, -- job done, unique for person
    personid char(10) not null, -- person who did job
    hours numeric(5,2) not null -- hours worked for job
    )

Hours more than 120 are overtime hours.

How to split regular and overtime hours into different columns using 
running total by job id and partition by person id?


For example, if John did 3 jobs 1, 2 and 3 with 90, 50 and 40 hours (180 
hours in total) respectively, the result table should be:


    personid    jobid    normal_hours    overtime_hours
    john            1              90                 0
    john            2              30                20
    john            3               0                40

The sum of the normal_hours column should not be greater than 120 per person.

The sum of the normal_hours and overtime_hours columns must be the same as the
sum of the hours column in the hours table for every person.


Note that since the hours running total becomes greater than 120 in job 2, 
job 2's hours should appear in both hours columns.


Maybe window functions can be used.

Andrus.


ON CONFLICT DO NOTHING ignored on bulk insert

2021-05-25 Thread Andrus

Hi!

Looking for a method to do a bulk insert that ignores product foreign key 
mismatches.


Only products which exist in the product table should be added to the price list.

Tried update/truncate/insert

    update pricelistnew set timestamp=to_char(now(), 'MMDDHH24MISS');
    truncate pricelist;
    insert into pricelist
    select * from pricelistnew
    on conflict do nothing;


but got error

 ERROR: insert or update on table "pricelist" violates foreign key
 constraint "pricelist_product_fkey"
 DETAIL: Key
 (product)=(TMMEM0EM00691BDS    ) is not present in table "product".

insert with foreign key check

    insert into pricelist
    select * from pricelistnew
    where product in (select product  from product)
    on conflict do nothing;

worked.

Why is the `on conflict do nothing` clause ignored?

How to add only the products that exist in the product table without using the check

    product in (select product from product)

The price list has 30 rows and this command takes several minutes to 
complete on a fast server. It locks tables so that other queries running at the 
same time are delayed. How to do this bulk insert efficiently?
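
Note that ON CONFLICT DO NOTHING only handles conflicts against unique or
exclusion constraints; a foreign key violation is an ordinary error, which is
why the clause appeared to be ignored. A hedged sketch of a join-based variant
of the working query (assuming product is the key column of the product
table), which sometimes plans better than the IN subquery on large inputs:

```sql
-- Sketch: the inner join keeps only rows whose product exists in product,
-- so the foreign key on pricelist cannot be violated.  ON CONFLICT still
-- guards only against unique/exclusion constraint conflicts.
INSERT INTO pricelist
SELECT pn.*
  FROM pricelistnew pn
  JOIN product p USING (product)
ON CONFLICT DO NOTHING;
```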


Using PostgreSQL 13.2, compiled by Visual C++ build 1900, 64-bit

Posted also in 
https://stackoverflow.com/questions/67683299/on-conflict-do-nothing-clause-is-ignored-on-insert


Andrus.



Re: SV: Log files polluted with permission denied error messages after every 10 seconds

2021-03-22 Thread Andrus

Hi!

Errors in the pg_wal directory seem not to occur in the patched version. Errors 
in pg_stat_tmp still occur. Yesterday's log introduced a new error message:


using stale statistics instead of current ones because stats collector 
is not responding


2021-03-21 03:02:23 EET stats collector LOG:  could not rename 
temporary statistics file "pg_stat_tmp/global.tmp" to 
"pg_stat_tmp/global.stat": Permission denied
2021-03-21 13:31:00 EET stats collector LOG:  could not rename 
temporary statistics file "pg_stat_tmp/global.tmp" to 
"pg_stat_tmp/global.stat": Permission denied
2021-03-21 15:15:52 EET stats collector LOG:  could not rename 
temporary statistics file "pg_stat_tmp/global.tmp" to 
"pg_stat_tmp/global.stat": Permission denied
2021-03-21 23:51:20 EET stats collector LOG:  could not rename 
temporary statistics file "pg_stat_tmp/global.tmp" to 
"pg_stat_tmp/global.stat": Permission denied
2021-03-21 23:51:25 EET autovacuum worker LOG:  using stale 
statistics instead of current ones because stats collector is not responding


Andrus.




Re: SV: Log files polluted with permission denied error messages after every 10 seconds

2021-03-20 Thread Andrus

Hi!

Today got 2 errors in patched version:

2021-03-20 20:31:27 EET stats collector LOG:  could not rename 
temporary statistics file "pg_stat_tmp/global.tmp" to 
"pg_stat_tmp/global.stat": Permission denied


2021-03-20 20:51:25 EET stats collector LOG:  could not rename 
temporary statistics file "pg_stat_tmp/global.tmp" to 
"pg_stat_tmp/global.stat": Permission denied


Andrus.



Re: SV: Log files polluted with permission denied error messages after every 10 seconds

2021-03-19 Thread Andrus

Hi!



Have you tested the unpatched builds?


No.

Andrus.




Re: SV: Log files polluted with permission denied error messages after every 10 seconds

2021-03-19 Thread Andrus

Hi!



Okay, cool.  I am going to send you privately two links to the builds
I am going to produce, 13.2 unpatched and 13.2 patched.


I replaced the files in the 13.1 server with the ones from your patched version. 
There have been no errors in the log file for 8 hours now.


Andrus.




Re: SV: Log files polluted with permission denied error messages after every 10 seconds

2021-03-17 Thread Andrus

Hi!

BUFFER OVERFLOW result stack trace is

"Frame","Module","Location","Address","Path"
"0","FLTMGR.SYS","FltDecodeParameters + 
0x1c5d","0xf8019f72555d","C:\Windows\System32\drivers\FLTMGR.SYS"
"1","FLTMGR.SYS","FltDecodeParameters + 
0x17bc","0xf8019f7250bc","C:\Windows\System32\drivers\FLTMGR.SYS"
"2","FLTMGR.SYS","FltDecodeParameters + 
0x1328","0xf8019f724c28","C:\Windows\System32\drivers\FLTMGR.SYS"
"3","FLTMGR.SYS","FltDecodeParameters + 
0x111e","0xf8019f724a1e","C:\Windows\System32\drivers\FLTMGR.SYS"
"4","ntoskrnl.exe","IofCallDriver + 
0x59","0xf80051856109","C:\Windows\system32\ntoskrnl.exe"
"5","ntoskrnl.exe","KeIsAttachedProcess + 
0xf3","0xf80051929343","C:\Windows\system32\ntoskrnl.exe"
"6","ntoskrnl.exe","NtQueryInformationFile + 
0x492","0xf80051e8b5c2","C:\Windows\system32\ntoskrnl.exe"
"7","ntoskrnl.exe","setjmpex + 
0x7905","0xf800519e6705","C:\Windows\system32\ntoskrnl.exe"

"8","","0x7ffc7a17f9e4","0x7ffc7a17f9e4",""
"9","","0x7ffc7639af04","0x7ffc7639af04",""
"10","","0x7ffc7728fe86","0x7ffc7728fe86",""
"11","","0x7ffc7728f622","0x7ffc7728f622",""
"12","","0x7ffc77290a46","0x7ffc77290a46",""
"13","","0x14048ccca","0x14048ccca",""
"14","","0x14009463b","0x14009463b",""
"15","","0x140094365","0x140094365",""
"16","","0x14008e541","0x14008e541",""
"17","","0x140286f46","0x140286f46",""
"18","","0x1400a17ca","0x1400a17ca",""
"19","","0x1402923cb","0x1402923cb",""
"20","","0x1401b95c0","0x1401b95c0",""
"21","","0x14049f304","0x14049f304",""
"22","","0x7ffc77bb7974","0x7ffc77bb7974",""
"23","","0x7ffc7a13a2d1","0x7ffc7a13a2d1",""

It looks like a buffer that is too small is passed to NtQueryInformationFile.

Andrus.




Re: SV: Log files polluted with permission denied error messages after every 10 seconds

2021-03-17 Thread Andrus

Hi!

Doesn't look like these are error messages.

There should be error messages that correspond to the error messages you see in 
the PostgreSQL log.


I excluded SUCCESS and DELETE PENDING results and included pg_wal path 
and postgres.exe process.


The log file contains only a few entries, BUFFER OVERFLOW and NAME NOT FOUND:

17:07:09,9618758    postgres.exe    11800 QueryAllInformationFile    
C:\Program Files\PostgreSQL\13\data\pg_wal\000100130021    
BUFFER OVERFLOW    CreationTime: 16.03.2021 20:04:57, LastAccessTime: 
17.03.2021 16:58:43, LastWriteTime: 17.03.2021 16:58:43, ChangeTime: 
17.03.2021 16:58:43, FileAttributes: A, AllocationSize: 16 777 216, 
EndOfFile: 16 777 216, NumberOfLinks: 1, DeletePending: False, 
Directory: False, IndexNumber: 0x3810009ffb4, EaSize: 0, Access: 
Read Attributes, Synchronize, Position: 0, Mode: Synchronous IO 
Non-Alert, AlignmentRequirement: Byte
17:07:09,9624993    postgres.exe    11800 QueryAllInformationFile    
C:\Program Files\PostgreSQL\13\data\pg_wal\000100130023    
BUFFER OVERFLOW    CreationTime: 16.03.2021 20:04:57, LastAccessTime: 
17.03.2021 17:07:09, LastWriteTime: 17.03.2021 17:07:09, ChangeTime: 
17.03.2021 17:07:09, FileAttributes: A, AllocationSize: 16 777 216, 
EndOfFile: 16 777 216, NumberOfLinks: 1, DeletePending: False, 
Directory: False, IndexNumber: 0x2000a010c, EaSize: 0, Access: Read 
Attributes, Synchronize, Position: 0, Mode: Synchronous IO Non-Alert, 
AlignmentRequirement: Byte
17:07:09,9630741    postgres.exe    11800 QueryAllInformationFile    
C:\Program Files\PostgreSQL\13\data\pg_wal\000100130024    
BUFFER OVERFLOW    CreationTime: 16.03.2021 20:04:57, LastAccessTime: 
16.03.2021 20:05:12, LastWriteTime: 16.03.2021 20:05:12, ChangeTime: 
16.03.2021 20:15:24, FileAttributes: A, AllocationSize: 16 777 216, 
EndOfFile: 16 777 216, NumberOfLinks: 1, DeletePending: False, 
Directory: False, IndexNumber: 0x2000a010d, EaSize: 0, Access: Read 
Attributes, Synchronize, Position: 0, Mode: Synchronous IO Non-Alert, 
AlignmentRequirement: Byte
17:07:09,9636100    postgres.exe    11800 QueryAllInformationFile    
C:\Program Files\PostgreSQL\13\data\pg_wal\000100130025    
BUFFER OVERFLOW    CreationTime: 16.03.2021 20:04:57, LastAccessTime: 
16.03.2021 20:05:12, LastWriteTime: 16.03.2021 20:05:12, ChangeTime: 
16.03.2021 20:15:24, FileAttributes: A, AllocationSize: 16 777 216, 
EndOfFile: 16 777 216, NumberOfLinks: 1, DeletePending: False, 
Directory: False, IndexNumber: 0x2000a010e, EaSize: 0, Access: Read 
Attributes, Synchronize, Position: 0, Mode: Synchronous IO Non-Alert, 
AlignmentRequirement: Byte
17:07:09,9641594    postgres.exe    11800 QueryAllInformationFile    
C:\Program Files\PostgreSQL\13\data\pg_wal\000100130026    
BUFFER OVERFLOW    CreationTime: 16.03.2021 20:04:57, LastAccessTime: 
16.03.2021 20:05:13, LastWriteTime: 16.03.2021 20:05:13, ChangeTime: 
16.03.2021 20:15:24, FileAttributes: A, AllocationSize: 16 777 216, 
EndOfFile: 16 777 216, NumberOfLinks: 1, DeletePending: False, 
Directory: False, IndexNumber: 0x2000a010f, EaSize: 0, Access: Read 
Attributes, Synchronize, Position: 0, Mode: Synchronous IO Non-Alert, 
AlignmentRequirement: Byte
17:07:09,9659048    postgres.exe    11800    CreateFile C:\Program 
Files\PostgreSQL\13\data\pg_wal    IS DIRECTORY Desired Access: Generic 
Read, Disposition: Open, Options: Synchronous IO Non-Alert, 
Non-Directory File, Attributes: N, ShareMode: Read, Write, Delete, 
AllocationSize: n/a
17:07:09,9661026    postgres.exe    11800 QueryAllInformationFile    
C:\Program Files\PostgreSQL\13\data\pg_wal    BUFFER OVERFLOW 
CreationTime: 10.02.2021 14:07:18, LastAccessTime: 17.03.2021 17:07:09, 
LastWriteTime: 17.03.2021 17:07:09, ChangeTime: 17.03.2021 17:07:09, 
FileAttributes: D, AllocationSize: 196 608, EndOfFile: 196 608, 
NumberOfLinks: 1, DeletePending: False, Directory: True, IndexNumber: 
0x100055e2b, EaSize: 0, Access: Read Attributes, Synchronize, 
Position: 0, Mode: Synchronous IO Non-Alert, AlignmentRequirement: Byte
17:07:09,9666396    postgres.exe    11800    CreateFile C:\Program 
Files\PostgreSQL\13\data\pg_wal\archive_status\000100130021.done 
NAME NOT FOUND    Desired Access: Read Attributes, Delete, Disposition: 
Open, Options: Non-Directory File, Open Reparse Point, Attributes: n/a, 
ShareMode: Read, Write, Delete, AllocationSize: n/a
17:07:09,9667813    postgres.exe    11800    CreateFile C:\Program 
Files\PostgreSQL\13\data\pg_wal\archive_status\000100130021.ready 
NAME NOT FOUND    Desired Access: Read Attributes, Delete, Disposition: 
Open, Options: Non-Directory File, Open Reparse Point, Attributes: n/a, 
ShareMode: Read, Write, Delete, AllocationSize: n/a
17:07:09,9668956    postgres.exe    11800    QueryDirectory C:\Program 
Files\PostgreSQL\13\data\pg_wal    NO MORE FILES FileInformationClass: 
FileBothDirectoryInformation


Andrus.


Re: SV: Log files polluted with permission denied error messages after every 10 seconds

2021-03-17 Thread Andrus

Hi!



I think what it would be most helpful to run "process monitor", so that you get
a log of the system calls and errors; perhaps that shows some details that
we cannot get from the error message.

Process Monitor shows a huge number of DELETE PENDING results from the 
CreateFile operation:


15:22:35,1708187    postgres.exe    11800    CreateFile C:\Program 
Files\PostgreSQL\13\data\pg_wal\00010011003B    DELETE 
PENDING    Desired Access: Read Attributes, Delete, Synchronize, 
Disposition: Open, Options: Synchronous IO Non-Alert, Open Reparse 
Point, Attributes: n/a, ShareMode: Read, Write, Delete, AllocationSize: n/a


15:22:35,2713856    postgres.exe    11800    CreateFile C:\Program 
Files\PostgreSQL\13\data\pg_wal\00010011003B    DELETE 
PENDING    Desired Access: Read Attributes, Delete, Synchronize, 
Disposition: Open, Options: Synchronous IO Non-Alert, Open Reparse 
Point, Attributes: n/a, ShareMode: Read, Write, Delete, AllocationSize: n/a


15:22:35,3719483    postgres.exe    11800    CreateFile C:\Program 
Files\PostgreSQL\13\data\pg_wal\00010011003B    DELETE 
PENDING    Desired Access: Read Attributes, Delete, Synchronize, 
Disposition: Open, Options: Synchronous IO Non-Alert, Open Reparse 
Point, Attributes: n/a, ShareMode: Read, Write, Delete, AllocationSize: n/a


15:22:35,4723624    postgres.exe    11800    CreateFile C:\Program 
Files\PostgreSQL\13\data\pg_wal\00010011003B    DELETE 
PENDING    Desired Access: Read Attributes, Delete, Synchronize, 
Disposition: Open, Options: Synchronous IO Non-Alert, Open Reparse 
Point, Attributes: n/a, ShareMode: Read, Write, Delete, AllocationSize: n/a


Andrus.




Re: SV: Log files polluted with permission denied error messages after every 10 seconds

2021-03-17 Thread Andrus


>On Wed, Mar 17, 2021 at 09:25:00AM +0200, Andrus wrote:

pg_config --configure outputs

--enable-thread-safety --enable-nls --with-ldap --with-openssl --with-uuid
--with-libxml --with-libxslt --with-icu --with-tcl --with-perl --with-python

Thanks.  Do you actually use OpenSSL,
Not directly. Maybe the server uses it for SSL connections, but non-SSL 
connections are also allowed.

  LDAP

no.

uuid-ossp,

It is used to get a GUID in a few places for compatibility with older servers.

xml2, PL/Perl PL/Python,

No.

or the XML datatype


Hopefully not on this server. The application code contains XML parsing, 
but those queries probably never run on this server.


Andrus.


for your applications there?  It may be
better if those custom builds have a minimum number of dependencies
filled, while still being compatible with what you do on those servers
so as they can still have some load applied.
--
Michael


Re: SV: Log files polluted with permission denied error messages after every 10 seconds

2021-03-17 Thread Andrus

Hi

>I am not completely sure which flags your installation has, but

pg_config --configure outputs

--enable-thread-safety --enable-nls --with-ldap --with-openssl 
--with-uuid --with-libxml --with-libxslt --with-icu --with-tcl 
--with-perl --with-python


Andrus.




Re: SV: Log files polluted with permission denied error messages after every 10 seconds

2021-03-17 Thread Andrus

Hi!


I am not completely sure which flags your installation has, but
another possibility is that I directly send to you two compiled
builds, one with the patch and one without it that you could directly
test.  I would not send that to the lists as an installation is rather
large, but I could just send you links from where you could download
both of them.  Then you would just need to stop the Postgres service,
do a drop-in deplacement of the binaries, and start again the Postgres
service.


I can try the binaries on the Intel server, which runs Postgres 13.1. I hope its 
data directory is compatible with them.


Andrus.



Re: SV: Log files polluted with permission denied error messages after every 10 seconds

2021-03-16 Thread Andrus

Hi!

>Those ten seconds are coming from RemoveXlogFile(), where pgrename()
>loops 100 times for 100ms before giving up.  So something holding up
>the file's handle prevents the removal to happen.


I tried the Sysinternals handle utility:

handle pg_wal

It shows that only postgres processes have open files in the pg_wal directory:

postgres.exe   pid: 11800  type: File   CC0: C:\Program 
Files\PostgreSQL\13\data\pg_wal\0001001200B7
postgres.exe   pid: 11800  type: File  12B8: C:\Program 
Files\PostgreSQL\13\data\pg_wal
postgres.exe   pid: 23904  type: File  1134: C:\Program 
Files\PostgreSQL\13\data\pg_wal\0001001200B5
postgres.exe   pid: 20908  type: File   17C: C:\Program 
Files\PostgreSQL\13\data\pg_wal\0001001200B7
postgres.exe   pid: 29892  type: File   C08: C:\Program 
Files\PostgreSQL\13\data\pg_wal\0001001100F4

...
postgres.exe   pid: 34260  type: File   C18: C:\Program 
Files\PostgreSQL\13\data\pg_wal\0001001100FC
postgres.exe   pid: 34484  type: File   C48: C:\Program 
Files\PostgreSQL\13\data\pg_wal\0001001100FC
postgres.exe   pid: 38740  type: File   180: C:\Program 
Files\PostgreSQL\13\data\pg_wal\0001001200B7




Attached is the
patch that should be tested, based on the suspected commit.  There are
actually two scenarios to worry about:
- Check that the code of 13.2 compiled manually is enough to see the
failure.
- Check that once the patch attached is applied makes the failure go
away.


Intel server has

PostgreSQL 13.1, compiled by Visual C++ build 1914, 64-bit

and AMD server

PostgreSQL 13.2, compiled by Visual C++ build 1914, 64-bit

Should I install Visual C++, compile, and replace the postgres.exe file on the 
AMD server?



I am trying on my side to reproduce the problem in a more reliable
way.  One thing I saw breaking in my setup is archive_command, where
it was not able to archive a segment with a simple copy, failing with
the same error as yours.

wal archiving is not used on those servers.

In one of those servers, do you have in pg_wal/ some files named
xlogtemp.N?  N is an integer that would be the PID of the process that
generated it.


No. The Intel server has 4 files with the .deleted extension. The AMD server has 
no .deleted files. It probably has lower traffic.


Andrus.



Re: SV: Log files polluted with permission denied error messages after every 10 seconds

2021-03-16 Thread Andrus

Hi

Where you getting the Postgres binaries from?


https://www.enterprisedb.com/downloads/postgres-postgresql-downloads


If we provide a patch, could you test it?

Yes

This would require that you do your own build,
unfortunately, but having an environment where this is easily
reproducible is a key thing.


I used C last time many years ago. I'm working in C# with Visual Studio 
2019.


I have two Windows 2019 servers. On the Intel Xeon Gold 6226R server it 
occurs every 10 seconds. Last logs:


2021-03-16 13:48:12 EET checkpointer LOG:  could not rename file 
"pg_wal/000100110097": Permission denied
2021-03-16 13:48:22 EET checkpointer LOG:  could not rename file 
"pg_wal/000100110098": Permission denied
2021-03-16 13:48:32 EET checkpointer LOG:  could not rename file 
"pg_wal/000100110099": Permission denied
2021-03-16 13:48:42 EET checkpointer LOG:  could not rename file 
"pg_wal/00010011009A": Permission denied
2021-03-16 13:48:52 EET checkpointer LOG:  could not rename file 
"pg_wal/00010011009D": Permission denied
2021-03-16 13:49:02 EET checkpointer LOG:  could not rename file 
"pg_wal/0001001100A0": Permission denied


On the AMD Ryzen 3970X Threadripper 32-core server it also occurs 
many times per day, but less frequently. Last logs:


2021-03-16 13:45:20 EET checkpointerLOG:  could not rename file 
"pg_wal/0001004E00FB": Permission denied
2021-03-16 13:50:02 EET checkpointerLOG:  could not rename file 
"pg_wal/0001004E00FB": Permission denied
2021-03-16 13:50:13 EET checkpointerLOG:  could not rename file 
"pg_wal/0001004F0082": Permission denied


So it should probably be reproducible on any Windows 2019 server.

Andrus.



Re: How to recover data from 9.3 data directory

2021-03-13 Thread Andrus



Or you could just use the Postgres APT repo, for example:

https://apt.postgresql.org/pub/repos/apt/dists/stretch-pgdg/9.3/

see also
https://wiki.postgresql.org/wiki/Apt/FAQ#Where_are_older_versions_of_the_packages.3F




I discovered that Debian allows installing 9.3 side by side with 12:

apt-get install postgresql-9.3

Andrus.



Re: How to recover data from 9.3 data directory

2021-03-13 Thread Andrus

Hi!

>Debian does not offer to install Postgres 9.3 into it. How to install 
postgres 9.3 in new Debian ?




>You'll have to compile it.


EnterpriseDB offers a 9.3 download for Linux at 
https://www.enterprisedb.com/downloads/postgres-postgresql-downloads


Maybe it is possible to install it on new Debian?

Or is it possible to download an ISO image with Postgres 9.3 installed?

Or should I try to find an old Debian ISO image and install into that?

Andrus.


Re: How to recover data from 9.3 data directory

2021-03-13 Thread Andrus

   Should I create virtual machine and install Linux with 9.3 into it
   or is there simpler solution?

that's the only option, unless you already have some compatible server 
around.


I have new Debian virtual machine with Postgres 12  .

Debian does not offer to install Postgres 9.3 into it. How can Postgres 9.3 be 
installed on new Debian?


Andrus.



Re: How to recover data from 9.3 data directory

2021-03-13 Thread Andrus

Hi!

> that won't work on windows. pg_upgrade only works if source and 
target environment are binary compatible, which isn't the case for 
windows / GNU Linux.


WAL archiving works between Linux and Windows, so I expected those commands 
to work as well.


How to determine whether this data is from a 32- or 64-bit server?

Should I create a virtual machine and install Linux with 9.3 into it, or is 
there a simpler solution?


Will data from a 32-bit server require 32-bit Linux with 32-bit Postgres, or 
will it work with 64-bit Linux as well?


Andrus.



How to recover data from 9.3 data directory

2021-03-13 Thread Andrus

Hi!

There is a data directory from a Postgres 9.3 server from an old CentOS machine.

The CentOS server is not running anymore and there is a Windows 10 workstation 
to recover on.


1. Tried to recover data from it by installing the 9.3 and 13 servers on 
Windows 10 and using


"C:\Program Files\PostgreSQL\13\bin\pg_upgrade.exe" ^
--old-datadir "D:/Centos93Data/data" ^
--new-datadir "C:/Program Files/PostgreSQL/13/data" ^
--old-bindir "C:/Program Files/PostgreSQL/9.3/bin" ^
--new-bindir "C:/Program Files/PostgreSQL/13/bin"

It complained that the postmaster seems to be running. I deleted the 
postmaster.pid file. Now it throws the error


Performing Consistency Checks
Checking cluster versions ok
The source cluster was not shut down cleanly.
Failure, exiting

2. Tried to run old server using

"C:\Program Files\PostgreSQL\9.3\bin\pg_ctl.exe" start -D 
"D:/Centos93Data/data"


got error

FATAL:  database files are incompatible with server

DETAIL:  The database cluster was initialized with USE_FLOAT8_BYVAL but 
the server was compiled without USE_FLOAT8_BYVAL.

HINT:  It looks like you need to recompile or initdb.

I tried both the 32- and 64-bit 9.3 servers on Windows but the error is the same.

How to recover data from the 9.3 Linux data directory?

Andrus.


Re: SV: Log files polluted with permission denied error messages after every 10 seconds

2021-03-10 Thread Andrus
00A0071
postgres.exe    25840    File    C:\Program 
Files\PostgreSQL\13\data\pg_wal\0001000A0072
postgres.exe    20048    File    C:\Program 
Files\PostgreSQL\13\data\pg_wal\0001000A0071
postgres.exe    30156    File    C:\Program 
Files\PostgreSQL\13\data\pg_wal\0001000A0071
postgres.exe    26976    File    C:\Program 
Files\PostgreSQL\13\data\pg_wal\0001000A0071
postgres.exe    26312    File    C:\Program 
Files\PostgreSQL\13\data\pg_wal\0001000A0074


...

Many wal files are used by multiple processes.

Andrus.


Re: SV: Log files polluted with permission denied error messages after every 10 seconds

2021-03-10 Thread Andrus

Today's log file contains

2021-03-10 11:01:58 EET checkpointer LOG:  could not rename file 
"pg_wal/0001000A003B": Permission denied


messages. Resource Manager shows that 0001000A003B is 
locked by process 30376.



According to pg_stat_activity this is a regular postgres process which is 
in the idle state, waiting for a ClientRead event. It executed its last 
query an hour ago.
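
The pg_stat_activity check above can be expressed as a query; a sketch that
lists idle backends waiting on the client, together with their last query
time:

```sql
-- Sketch: idle backends waiting for the client, with their last query time.
SELECT pid, usename, state, wait_event_type, wait_event,
       query_start, left(query, 60) AS last_query
  FROM pg_stat_activity
 WHERE state = 'idle'
   AND wait_event = 'ClientRead';
```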


It looks like WAL files are not released after use.

Andrus.




Re: SV: Log files polluted with permission denied error messages after every 10 seconds

2021-03-09 Thread Andrus

Any hints in Windows event viewer? Events  occurring at the same time showing 
up there.


I looked into Administrative Events/Custom Views and a few others. There are 
no messages about this. Windows Performance Monitor and Filemon show 
files opened by process.


How to do the reverse: log the processes and threads which use files in the 
pg_wal directory?


Is there some utility for this, or can the Restart Manager or another API be used 
to create such a log?


Andrus.




Re: Log files polluted with permission denied error messages after every 10 seconds

2021-03-08 Thread Andrus



Yeah, I've wondered for some time if the apparent EACCES code is
actually being mapped from something other than ERROR_SHARING_VIOLATION
in (some of?) these reports.  The hard part is to hold onto the
Windows error code alongside errno without massively invasive changes.


Permission denied errors occur on Windows. Debian logs don't contain them.

On Windows, pg_database_size('mydb') and similar queries also fail 
frequently with a permission denied error.


I got information about the R1Soft Idera backup client in use. It runs only 
once per night, for 16 minutes, and compares disk blocks to find changed 
blocks to back up. So it should not cause permission denied errors every 
10 seconds.


Andrus.



Re: Log files polluted with permission denied error messages after every 10 seconds

2021-03-08 Thread Andrus



Another thought: if it's not a sharing violation, I wonder if we
should consider dumping more raw Windows error information in the
messages we log, because, if I recall correctly, we're converting many
Windows error codes into few Unix-style error numbers and thereby
throwing away valuable clues.  It makes it a bit more confusing when
trying to ask a Windows expert what might be happening.


Knowing the name of the process holding the file, and other detailed 
information, would be very useful.


On the other Windows 2019 server with Postgres 13, excluding the Postgres drive 
from the F-PROT scan seems to decrease those errors.

However, they still occur even when F-PROT is not active.

There are also regular stat errors on Windows servers in size queries, like

ERROR: could not stat file "base/45010/172654232": Permission denied;

in query

SELECT pg_size_pretty(pg_database_size('eeva')::bigint)::char(10) as size

and

ERROR: could not stat file "base/45010/172654232": Permission denied;

in query

    SELECT pg_catalog.pg_size_pretty(SUM(
        CASE WHEN pg_catalog.has_database_privilege(d.datname, 'CONNECT')
        THEN pg_catalog.pg_database_size(d.datname)
        ELSE 0
        END)::bigint)::char(50) AS SIZE
    FROM pg_catalog.pg_database d
and

ERROR: could not stat file "base/45010/172654232": Permission denied;
Error while executing the query



    SELECT d.datname::char(15),
        pg_catalog.pg_get_userbyid(d.datdba)::char(18) AS Owner,
        CASE WHEN pg_catalog.has_database_privilege(d.datname, 'CONNECT')
        THEN pg_catalog.pg_size_pretty(pg_catalog.pg_database_size(d.datname)::bigint)
        ELSE 'No Access'
        END::char(50) AS SIZE
    FROM pg_catalog.pg_database d
    ORDER BY
        CASE WHEN pg_catalog.has_database_privilege(d.datname, 'CONNECT')
        THEN pg_catalog.pg_database_size(d.datname)
        ELSE NULL
        END DESC -- nulls first
Andrus.



Re: Log files polluted with permission denied error messages after every 10 seconds

2021-03-07 Thread Andrus

Hi!

On Windows, RemoveXlogFile() would still rename a given WAL segment
file with a ".deleted" suffix with or without wal_recycle in the case
where a recycling of a WAL segment is not necessary, for example
if max_wal_size is already full.  So this has no effect.


Should files with the .deleted extension be deleted manually to save disk space? 
Many of them have dates before today.


Andrus.



Re: Log files polluted with permission denied error messages after every 10 seconds

2021-03-06 Thread Andrus

Hi!

These are generated on Windows when removing a past WAL segment, where
the process involves a rename followed by durable_unlink() that would
generate some LOG entries in the logs if the internal unlink() failed
(see RemoveXlogFile() in xlog.c).


I changed wal_recycle to off, so the checkpointer should no longer try to 
rename WAL files. But it still tries to rename files. No idea why it does not 
use this setting:


2021-03-06 19:44:28 EET checkpointer LOG:  could not rename file 
"pg_wal/0001000600EB": Permission denied


2021-03-06 19:44:38 EET checkpointer LOG:  could not rename file 
"pg_wal/0001000600F0": Permission denied
2021-03-06 19:44:48 EET checkpointer LOG:  could not rename file 
"pg_wal/0001000600F7": Permission denied
2021-03-06 19:44:58 EET checkpointer LOG:  could not rename file 
"pg_wal/0001000600FB": Permission denied
2021-03-06 19:45:08 EET postmaster LOG:  received SIGHUP, reloading 
configuration files
2021-03-06 19:45:08 EET postmaster LOG:  parameter "wal_recycle" 
changed to "off"
2021-03-06 19:45:08 EET checkpointer LOG:  could not rename file 
"pg_wal/0001000600FE": Permission denied
2021-03-06 19:45:18 EET checkpointer LOG:  could not rename file 
"pg_wal/0001000600FF": Permission denied
2021-03-06 19:45:28 EET checkpointer LOG:  could not rename file 
"pg_wal/00010007": Permission denied
2021-03-06 19:45:38 EET checkpointer LOG:  could not rename file 
"pg_wal/000100070002": Permission denied
2021-03-06 19:45:48 EET checkpointer LOG:  could not rename file 
"pg_wal/000100070011": Permission denied
2021-03-06 19:45:59 EET checkpointer LOG:  could not rename file 
"pg_wal/000100070012": Permission denied


2021-03-06 19:46:09 EET checkpointer LOG:  could not rename file 
"pg_wal/000100070015": Permission denied


2021-03-06 19:46:19 EET checkpointer LOG:  could not rename file 
"pg_wal/000100070016": Permission denied


Should the checkpointer process be terminated to force it to use the new 
setting? Is it safe to kill it during database usage?
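
For what it's worth, wal_recycle is a reload-time parameter, so no process
needs to be killed; the log above already shows the reload taking effect
("parameter \"wal_recycle\" changed to \"off\""). A sketch, assuming superuser
rights, of changing it from SQL instead of editing postgresql.conf:

```sql
-- Sketch: change the setting and reload the configuration without a restart.
ALTER SYSTEM SET wal_recycle = off;
SELECT pg_reload_conf();
```

Note, though, that per the explanation quoted in the 2021-03-07 message above,
the rename to ".deleted" happens with or without wal_recycle, so the messages
may continue regardless.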


Andrus.


Re: Log files polluted with permission denied error messages after every 10 seconds

2021-03-06 Thread Andrus

Hi!

> On which files are those complaints?

log contains file names:

...

2021-03-06 10:27:51.468 EET [4580] LOG:  could not rename file 
"pg_wal/000100060092": Permission denied
2021-03-06 10:28:01.526 EET [4580] LOG:  could not rename file 
"pg_wal/000100060098": Permission denied
2021-03-06 10:28:11.582 EET [4580] LOG:  could not rename file 
"pg_wal/000100060099": Permission denied
2021-03-06 10:28:21.637 EET [4580] LOG:  could not rename file 
"pg_wal/0001000600E8": Permission denied
2021-03-06 10:28:31.692 EET [4580] LOG:  could not rename file 
"pg_wal/0001000600EA": Permission denied
2021-03-06 10:28:41.750 EET [4580] LOG:  could not rename file 
"pg_wal/0001000600EB": Permission denied
2021-03-06 10:28:51.807 EET [4580] LOG:  could not rename file 
"pg_wal/0001000600EC": Permission denied
2021-03-06 10:29:01.864 EET [4580] LOG:  could not rename file 
"pg_wal/0001000600ED": Permission denied
2021-03-06 10:29:11.920 EET [4580] LOG:  could not rename file 
"pg_wal/0001000600EE": Permission denied
2021-03-06 10:29:21.976 EET [4580] LOG:  could not rename file 
"pg_wal/0001000600EF": Permission denied
2021-03-06 10:29:32.033 EET [4580] LOG:  could not rename file 
"pg_wal/0001000600F0": Permission denied


Server was installed some days ago. The numbers at the end of the file 
names are small, so it looks like almost every WAL file in the timeline 
causes an entry.



It seems to me that you may have
more going on in this system that interacts with your data folder than
you think.


There is a Windows server backup utility client from 
https://www.r1soft.com/ . It looks like it performs block-level 
real-time backup of the HDD.


Its about box shows the year 2015, but Windows Server 2019 is used. 
Maybe it is outdated and causes the issue. It is maintained by the ISP 
and I cannot stop it easily.


I haven't found an option in its config to suspend or configure it.

Config contains

max_wal_size = 1GB

There are 67 files in pg_wal. Earlier I have had similar errors in my app 
when the Windows temporary directory contained 20 files by mistake. 
Cleaning the temp directory and adding a random number to temporary file 
names seems to fix this.


Should max_wal_size or another parameter be decreased or increased to 
avoid the file access conflict?




pg_wal also contains files with .deleted extension like

0001000500B2.deleted

These are generated on Windows when removing a past WAL segment, where
the process involves a rename followed by durable_unlink() that would
generate some LOG entries in the logs if the internal unlink() failed
(see RemoveXlogFile() in xlog.c).


.deleted files have earlier dates (yesterday and March 4th). Almost all 
regular WAL files have today's date.


There are about 30 .deleted files out of the total 67 files in pg_wal. 
Will Postgres remove .deleted files automatically or should I create a 
Windows task which deletes them periodically?


Andrus.




Re: Log files polluted with permission denied error messages after every 10 seconds

2021-03-05 Thread Andrus

Hi!

>Windows? Don't let anti-virus software mess with the data directory.

Windows default Windows Defender is active. I excluded data, pg_wal 
folders and postgres process:



Then turned real-time protection off:

Problem persists. New entry is written after every 10 seconds.

pg_wal also contains files with .deleted extension like

0001000500B2.deleted

Andrus.




Log files polluted with permission denied error messages after every 10 seconds

2021-03-05 Thread Andrus

Hi!

Postgres 13.1 is installed in windows server. There are about 100 users.

Log files contain a huge number of permission denied error messages 
every day, like:


2021-03-04 01:38:49.818 EET [4580] LOG:  could not rename file 
"pg_wal/00010005001B": Permission denied
2021-03-04 01:48:42.725 EET [4580] LOG:  could not rename file 
"pg_wal/00010005001B": Permission denied
2021-03-04 01:53:22.427 EET [4580] LOG:  could not rename file 
"pg_wal/00010005001B": Permission denied
2021-03-04 01:58:19.623 EET [4580] LOG:  could not rename file 
"pg_wal/00010005001B": Permission denied
2021-03-04 02:03:24.080 EET [4580] LOG:  could not rename file 
"pg_wal/00010005001B": Permission denied

2021-03-04 02:43:17.983 EET [7764] LOG:  invalid length of startup packet
2021-03-04 02:43:21.634 EET [4580] LOG:  could not rename file 
"pg_wal/00010005001B": Permission denied
2021-03-04 02:48:25.048 EET [4580] LOG:  could not rename file 
"pg_wal/00010005001B": Permission denied
2021-03-04 02:53:19.229 EET [4580] LOG:  could not rename file 
"pg_wal/00010005001B": Permission denied
2021-03-04 02:58:19.176 EET [4580] LOG:  could not rename file 
"pg_wal/00010005001B": Permission denied
2021-03-04 03:03:19.136 EET [4580] LOG:  could not rename file 
"pg_wal/00010005001B": Permission denied
2021-03-04 03:29:33.926 EET [2460] FATAL:  expected SASL response, got 
message type 0
2021-03-04 03:36:04.995 EET [4816] LOG:  could not rename temporary 
statistics file "pg_stat_tmp/global.tmp" to "pg_stat_tmp/global.stat": 
Permission denied
2021-03-04 03:36:09.666 EET [10340] LOG:  using stale statistics instead 
of current ones because stats collector is not responding
2021-03-04 03:53:20.036 EET [4580] LOG:  could not rename file 
"pg_wal/00010005001B": Permission denied
2021-03-04 04:03:19.001 EET [4580] LOG:  could not rename file 
"pg_wal/00010005001B": Permission denied


In the middle of the day, when usage is intensive, these appear exactly 
every 10 seconds:


2021-03-05 12:08:40.001 EET [4580] LOG:  could not rename file 
"pg_wal/000100050078": Permission denied
2021-03-05 12:08:50.059 EET [4580] LOG:  could not rename file 
"pg_wal/00010005007D": Permission denied
2021-03-05 12:09:00.115 EET [4580] LOG:  could not rename file 
"pg_wal/000100050082": Permission denied
2021-03-05 12:09:10.171 EET [4580] LOG:  could not rename file 
"pg_wal/000100050088": Permission denied
2021-03-05 12:09:20.224 EET [4580] LOG:  could not rename file 
"pg_wal/0001000500A4": Permission denied
2021-03-05 12:09:30.281 EET [4580] LOG:  could not rename file 
"pg_wal/0001000500A8": Permission denied
2021-03-05 12:09:40.339 EET [4580] LOG:  could not rename file 
"pg_wal/0001000500B0": Permission denied
2021-03-05 12:09:50.397 EET [4580] LOG:  could not rename file 
"pg_wal/0001000500E7": Permission denied
2021-03-05 12:10:00.456 EET [4580] LOG:  could not rename file 
"pg_wal/0001000500E8": Permission denied
2021-03-05 12:10:10.514 EET [4580] LOG:  could not rename file 
"pg_wal/0001000500E9": Permission denied
2021-03-05 12:10:20.570 EET [4580] LOG:  could not rename file 
"pg_wal/00010005000000EB": Permission denied
2021-03-05 12:10:30.626 EET [4580] LOG:  could not rename file 
"pg_wal/0001000500EC": Permission denied


How to fix this ?

It looks like the database is working normally; users haven't reported 
any issues.


Andrus.



Re: Duplicate key error

2021-03-04 Thread Andrus

Hi!

>I just meant a regular update (which might impact 0 rows) and then 
insert (not exists) like you are doing already.


This causes a duplicate key exception if another process adds the same 
row to the table at the same time.




>--transaction still waits. Should manual locking be used or is there a 
better method.

>I don't follow what you mean.


Allow access to this table for a single transaction only: wait for an 
exclusive lock, do the update/insert, release the exclusive lock.
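The lock/modify/release sequence above can be sketched with an advisory lock, which serializes concurrent sessions without locking the whole table. This is a sketch, not the poster's actual code: the key string passed to hashtext() is my own choice, and pg_advisory_xact_lock()/hashtext() are standard Postgres functions.

```sql
BEGIN;
-- Take a transaction-scoped advisory lock derived from the key value;
-- concurrent sessions using the same key will queue here.
SELECT pg_advisory_xact_lock(hashtext('session:WIN-N9BSKUNKBC8'));

DELETE FROM session WHERE workplace = 'WIN-N9BSKUNKBC8';
INSERT INTO session (workplace, ipaddress, logintime, loggeduser)
VALUES ('WIN-N9BSKUNKBC8',
        inet_client_addr()::char(14),
        current_timestamp::char(28),
        current_user);

COMMIT;  -- the advisory lock is released automatically at commit/rollback
```

Because the lock is transaction-scoped (the _xact_ variant), it cannot be leaked if the session aborts mid-script.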


Andrus.



Re: Duplicate key error

2021-03-04 Thread Andrus

Hi

>Why just do a plain update, relying on row level locking to serialize 
requests properly, and then just do an insert where not exists? Is there 
value in doing the delete? I don't see it.


This is an option. How to do update+insert in SQL on 9+? Or should a 
plpgsql procedure be created for this?
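For Postgres versions before 9.5 (where INSERT ... ON CONFLICT is unavailable), the documentation's classic pattern is a plpgsql loop that retries on unique_violation. A sketch adapted to the session table from this thread; the function name and parameter names are my own:

```sql
CREATE OR REPLACE FUNCTION upsert_session(p_workplace text, p_user text)
RETURNS void AS $$
BEGIN
    LOOP
        -- try the update first
        UPDATE session
           SET loggeduser = p_user,
               logintime  = current_timestamp::char(28)
         WHERE workplace = p_workplace;
        IF found THEN
            RETURN;
        END IF;
        -- row absent: try to insert; a concurrent session may beat us to it
        BEGIN
            INSERT INTO session (workplace, loggeduser, logintime)
            VALUES (p_workplace, p_user, current_timestamp::char(28));
            RETURN;
        EXCEPTION WHEN unique_violation THEN
            -- a concurrent insert won; loop back and update instead
        END;
    END LOOP;
END;
$$ LANGUAGE plpgsql;
```

The loop terminates on either a successful UPDATE or a successful INSERT, so no manual table locking is needed.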


After the insert, if another process inserts the same key value, the 
transaction still waits. Should manual locking be used or is there a 
better method?


Andrus.


Re: Duplicate key error

2021-03-04 Thread Andrus

Hi!


Since we have not actually seen the entire script nor have any idea 
what the other process is, there is no way to answer this.


This is the whole script. It will be run by multiple scheduled tasks, 
maybe at the same time.


It registers the logged-in user. Different processes may have the same 
user name. In this case one row should remain.


I can just ignore the duplicate key error on commit, but maybe there is 
some better way that does not cause the error.


Andrus.




Re: Duplicate key error

2021-03-03 Thread Andrus

Hi!
says something else is inserting/updating using that key value. So 
obviously your script is not catching all the conflicts.


> At this point your best bet is to monitor the Postgres log and see 
what else is happening at the time of the error. I'm guessing you will 
find another process working on that table.


It looks like another process has added the same row during the transaction.

How to change the script so that it works starting from Postgres 9.0 and 
does not cause an error in this case?


Andrus.



Re: Duplicate key error

2021-03-03 Thread Andrus

Hi!

>It sounds like this should be re-written as 'insert on conflict do 
update' statement.


It should work in Postgres 9 also; ON CONFLICT is not available in 
Postgres 9.


This code does not insert duplicate key values, so why does it throw an error?

You can create a testcase that runs this code from multiple processes to 
try to reproduce the error.


Andrus.




Re: Duplicate key error

2021-03-03 Thread Andrus

Hi!


And if that name already exists there would be a duplicate key error.


The name cannot exist: the first delete removes it. The tables are not 
changed by an external process during the transaction. As double 
insurance, the insert performs an additional existence check and adds 
the row only if the key does not exist.


>Last inserted row data should remain.


I'm not understanding the above.


This table should contain the last login time, user and IP address. It 
should be updated by every process on login. For this, the old entry is 
removed if it exists and a new entry with the same primary key is added.


Andrus.




Re: Duplicate key error

2021-03-03 Thread Andrus

Hi!


There is no other process inserting  to this table?

There may be other processes on this server trying to insert the same 
primary key value (server name).


Last inserted row data should remain.

Andrus.




Duplicate key error

2021-03-03 Thread Andrus

Hi!

Sometimes duplicate key error

    duplicate key value violates unique constraint "session_pkey"
    Key (workplace)=(WIN-N9BSKUNKBC8 ) already exists.

occurs in script:

    delete from session where workplace='WIN-N9BSKUNKBC8' ;
    INSERT INTO session (workplace,ipaddress,logintime,loggeduser)
    SELECT 'WIN-N9BSKUNKBC8' , 
inet_client_addr()::CHAR(14),current_timestamp::CHAR(28),CURRENT_USER 
WHERE NOT EXISTS (SELECT 1 FROM session WHERE workplace='WIN-N9BSKUNKBC8' )


The script is run from Windows Task Scheduler every 10 minutes. The 
error occurs only sometimes.


How to fix this ?
Table is defined as

    CREATE TABLE public.session
    (
    loggeduser character(10) ,
    workplace character(16) NOT NULL,
    ipaddress character(20) ,
    logintime character(28) ,
    activity timestamp with time zone,
    CONSTRAINT session_pkey PRIMARY KEY (workplace)
    )


Environment:

    PostgreSQL 13.1, compiled by Visual C++ build 1914, 64-bit
    Windows server 2019
    psqlODBC driver 13.00.


I already asked this on this list many years ago. At that time Tom 
replied that it looks like an index race condition bug which was fixed 
in a later Postgres 9 version.


However, this issue still exists in Postgres 13.1.

Andrus.



Turn jit off for slow subquery in Postgres 12

2021-02-15 Thread Andrus

Hi!

Last year I posted a testcase [1] which shows that using jit makes 
execution an order of magnitude slower (2.5 s vs 0.4 s) in a typical 
shopping cart application product search in Postgres 12.


There are also other reports on this [2,3].

I tried to turn jit off for the subquery using

select ...  from
(
set jit to off;
select ... from cartdata, ...  where 
set jit to on
)

But this causes a syntax error.

How to turn off jit for specific subquery in Postgres 12 ?

The subquery is generated by EF Core and splitting it into multiple 
statements is not possible.
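jit cannot be toggled inside a single statement, but it can be disabled for just the transaction that runs the generated query, or persistently for the application's login role. A sketch; the role name app_user is a placeholder:

```sql
-- Per transaction: SET LOCAL reverts automatically at COMMIT/ROLLBACK,
-- so only the statements inside this block run with jit disabled.
BEGIN;
SET LOCAL jit = off;
-- ... run the EF Core generated SELECT here ...
COMMIT;

-- Or persistently, for every session of the application's login role:
ALTER ROLE app_user SET jit = off;
```

The SET LOCAL variant requires wrapping the generated query in a transaction on the client side; the ALTER ROLE variant needs no application changes at all.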


Postgres upgrade is not planned.

Andrus.

[1] 
https://www.postgresql.org/message-id/A2E2572094D4434CAEB57C80085B22C7@dell2
[2] 
https://www.postgresql.org/message-id/CAHOFxGo5xJt02RmwAWrtv2K0jcqqxG-cDiR8FQbvb0WxdKhcgw%40mail.gmail.com

[3]https://www.postgresql.org/message-id/629715.1595630222%40sss.pgh.pa.us



Re: permission denied for large object 200936761

2021-02-02 Thread Andrus

Hi!

So at that point the deed has been done.

The questions to ask:

1) Why the driver thinks it is being  passed a large object in the 
first place?


The source data type was binary. It was mapped to oid for an unknown reason.




2) Have there been any recent changes to code that passes through the 
ODBC driver that would account for 1)?
3) To help with 2), where is 'INSERT INTO report ( ...  ) values (.. , 
'200936767'::lo, ... )" coming from?

My suspicion is that it is user initiated change.

This change was done by me. I added new reports from another database. 
Reports contain binary columns and the import throws an error about the 
unknown lo type.


I removed the binary columns from the import and after that the import succeeded.

It looks like during this process 3 rows were added to the large object 
metadata table.


I used

select lo_unlink(oidnumber)

to remove them.

>If it is not and you suspect the ODBC driver then I would suggest 
bringing it up on the -odbc list:

>https://www.postgresql.org/list/pgsql-odbc/

I created binary data in client side creating FoxPro cursor

create cursor t ( t gen  )

and used psqlodbc to insert this data:

create temp table test ( test bytea ) on commit drop;
insert into test values ( ?t.t );

This code throws exception

type "lo" does not exist

but each call adds new row to pg_largeobject_metadata table.

The ODBC driver creates a large object and adds the lo cast. This large 
object remains even if the transaction is rolled back due to the 
nonexistent lo type.


C7=0  (bytea as logvarbinary is false) is used in connection string.

Andrus.




Re: permission denied for large object 200936761

2021-02-02 Thread Andrus

Hi!



So?  What is your point?
Somebody created a large object of size 0.


The report table has a bytea column. It looks like the psqlodbc driver 
adds a ::lo cast when inserting binary data:


https://github.com/hlinnaka/psqlodbc/blob/master/convert.c#L4564

and this adds row to pg_largeobject_metadata table.

Why does it add a cast to the lo type? This type does not exist on the 
Postgres server and causes a server error.


Andrus.




Re: permission denied for large object 200936761

2021-02-02 Thread Andrus

Hi!


I don't suppose this was done in a structured way that could be gone 
back over?



Accidentally, a '200936767'::lo cast was issued:

INSERT INTO report ( ...  ) values (.. , '200936767'::lo, ... )

The server throws the error "type "lo" does not exist" for this.

Maybe this causes orphan large object creation by the server or by the 
ODBC driver. How to fix this?


The report table should not have lo type columns. No idea why this cast 
is generated by psqlodbc.


Andrus.



Re: permission denied for large object 200936761

2021-02-02 Thread Andrus

Hi!

>I have imported data from other clusters and executed lot of different 
sql commands.  I have used grant, revoke, reassign commands to change 
privileges for other users and have deleted and added users.




I don't suppose this was done in a structured way that could be gone 
back over?


The exact command sequence cannot be restored.

I have a script to normalize rights for a user. It removes all rights 
first and sets the desired rights afterwards.


This was executed a lot of times; it has been used for years. There were 
also manual user rights adjustments using SQL commands in cases where 
this universal script cannot be used. There are approx. 300 Postgres 
roles in the cluster, and users change every week.


The previous dumpuser backup which succeeded was on January 4th, 9 MB in 
custom format. There are nightly backups of the databases in the cluster.


There is also a hot standby, base backups every Sunday using 
pg_basebackup, and WAL archiving. WAL archiving and the hot standby were 
broken in the previous week (I increased max_connections=400 on the main 
server but forgot to increase it on the standby server; WAL archiving is 
also done from the hot standby server).





You could try some of the functions here:

https://www.postgresql.org/docs/12/lo-funcs.html

to see if you can figure it out.

There is only one function, lo_get(), on this page which returns 
data. I tried


Actually there is a second one, lo_export(), at the bottom of the page. 
It needs superuser privilege and access to the server file system.


Tried in server using psql

select lo_export(200936761,'large1');
select lo_export(200936762,'large2');
select lo_export(200936767,'large3');

result files have zero size.

>What happens if you query:


https://www.postgresql.org/docs/12/catalog-pg-largeobject.html

as a superuser?


> Do you see anything in the data field?

select * from pg_largeobject

running as superuser returns an empty table with 3 columns:

loid, pageno and data

Andrus.




Re: permission denied for large object 200936761

2021-02-02 Thread Andrus

Hi!

You can extract it with

  \lo_export 200936761 'somefile'

in psql and examine the file.

Ask the people who use that database!


Tried

root@c202-76:~# ./pgsqlkaiv.sh

psql (12.2 (Debian 12.2-2.pgdg100+1))
Type "help" for help.

sba=#  \lo_export 200936761 'large200936761'
lo_export
sba=# \q

root@c202-76:~# ls -l large*
-rw-r--r-- 1 root root 0 veebr  2 10:45 large200936761

The result file size is 0.

Andrus.



Re: permission denied for large object 200936761

2021-02-01 Thread Andrus

Hi!

> What code changed between the last backup and today?

I have imported data from other clusters and executed a lot of different 
SQL commands. I have used GRANT, REVOKE and REASSIGN commands to change 
privileges for other users, and have deleted and added users.


The cluster contains 25 databases. There are 50 users executing SELECT, 
INSERT, UPDATE, DELETE, CREATE, DROP and similar commands.


dumpuser has rights only to one database and two schemas (public and 
firma74 ).




You could try some of the functions here:

https://www.postgresql.org/docs/12/lo-funcs.html

to see if you can figure it out.


There is only one function, lo_get(), on this page which returns data. I 
tried


select * from lo_get(200936761);

select * from lo_get(200936762);

select * from lo_get(200936767);

Those queries returned one row containing one zero-length column, lo_get.

Andrus.




Re: permission denied for large object 200936761

2021-02-01 Thread Andrus

Hi!

>Obviously large objects *are* used.


>You have to grant the database use permissions with

>  GRANT SELECT ON LARGE OBJECT 200936761 TO dumpuser;

>Alternatively, use the -B option of pg_dump to skip dumping
>large objects.


I added the -B option and set lo_compat_privileges=on in 
postgresql.conf as a temporary fix.


dumpuser has created backups of the same two schemas for years without 
issues. The issue started to occur today.


Application does not create large objects. It uses bytea columns instead.

How to figure out what this large object is?

Andrus.



Re: permission denied for large object 200936761

2021-02-01 Thread Andrus
>I misspoke earlier about large objects not being tied to a 
schema.table. They can be as a column of type oid.


To see if they are try :

SELECT
    relname,
    attname
FROM
    pg_attribute AS pa
    JOIN pg_class AS pc ON pa.attrelid = pc.oid
WHERE
    atttypid = 'oid'::regtype
    AND relnamespace = 'public'::regnamespace
    AND attnum > 0;

Where relnamespace is the schema you are interested in.


pg_dump which throws error is called with -n public -n firma74 parameters

I tried

SELECT
    relname,
    attname
FROM
    pg_attribute AS pa
    JOIN pg_class AS pc ON pa.attrelid = pc.oid
WHERE
    atttypid = 'oid'::regtype
    AND relnamespace in ( 'public'::regnamespace, 'firma74'::regnamespace )
    AND attnum > 0;

It returns 0 rows.

Andrus.


Re: permission denied for large object 200936761

2021-02-01 Thread Andrus

Hi!

>Long term figure out what they are and if they are needed or not.

The non-superuser backup worked earlier. It looks like large objects 
suddenly appeared in the database:


select * from pg_largeobject_metadata

Oid        Lomowner
200936761  30152
200936762  30152
200936767  30152

How to figure out what the large objects with oids 200936761, 200936762 
and 200936767 are?


pg_dump throws an error on the first of them: 200936761.

Andrus.



Re: permission denied for large object 200936761

2021-02-01 Thread Andrus

Hi!


 >Large objects exist independent of those. The important part of the 
above is lomowner. Use that oid to find the role that owns the 
objects here:


select rolname from pg_roles where oid = 30152 ;



Not sure what the above is supposed to be doing?



I showed the user definition


It hasn't created any large objects.


What hasn't created large objects?

I




How to use this information to fix the issue ?


Do the pg_dump as user clusteradmin.


This works. However, I need to allow a non-superuser to create the 
backup also. How to do this?


Andrus.




Re: permission denied for large object 200936761

2021-02-01 Thread Andrus

Hi!

>Large objects exist independent of those. The important part of the 
above is lomowner. Use that oid to find the role that owns the objects 
here:


select rolname from pg_roles where oid = 30152 ;

returns my role ,  clusteradmin . I have superuser rights:

CREATE ROLE clusteradmin WITH
  LOGIN
  SUPERUSER
  INHERIT
  NOCREATEDB
  NOCREATEROLE
  NOREPLICATION;

GRANT db1_owner, db2_owner, ... to  clusteradmin;

It hasn't created any large objects.

How to use this information to fix the issue ?

Andrus.


Re: permission denied for large object 200936761

2021-02-01 Thread Andrus

Hi!

>Well the user that runs the pg_dump needs to have permissions on the 
large objects. For more information see below.


How to add permissions for this to non-superusers?

GRANT command

GRANT { { SELECT | UPDATE } [, ...] | ALL [ PRIVILEGES ] }
    ON LARGE OBJECT loid [, ...]
    TO role_specification [, ...] [ WITH GRANT OPTION ]

This requires a large object id. How to grant the backup privilege if 
there are no large objects in the database?
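There is no GRANT ... ON ALL LARGE OBJECTS, but existing objects can be granted in a loop over pg_largeobject_metadata. A sketch using a DO block and dynamic SQL; dumpuser is the role from this thread, and newly created large objects would still need the same treatment:

```sql
DO $$
DECLARE
    lo oid;
BEGIN
    -- grant read access on every existing large object to dumpuser
    FOR lo IN SELECT oid FROM pg_largeobject_metadata LOOP
        EXECUTE format('GRANT SELECT ON LARGE OBJECT %s TO dumpuser', lo);
    END LOOP;
END;
$$;
```

This must be run by a superuser or by the owner of the large objects.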


Should lo_compat_privileges be set in postgresql.conf, or is there a 
better way?


-n public -n firma74

command line options are used.



You can query that to see what is there. I would not go about 
deleting until you find what the large objects are for.



select * from pg_largeobject

returns empty table.



I haven't used large objects in a while. Forgot that they now have 
permissions associated with them. Try:


https://www.postgresql.org/docs/12/catalog-pg-largeobject-metadata.html

instead.


select * from  pg_largeobject_metadata

returns 3 rows:

Oid  Lomowner

  200936761   30152

  200936762  30152

  200936767   30152

How to find the table and schema which are referenced by these?

Andrus.




Re: permission denied for large object 200936761

2021-02-01 Thread Andrus

Hi!


Database does not contain large objects.

pg_dump starts to throw error

ERROR:  permission denied for large object 200936761


Did you do the pg_dump as a superuser?


No.

pg_dump needs to be invokable by a non-superuser also. It backs up two 
schemas, public and firma74.


-n public -n firma74

command line options are used.



You can query that to see what is there. I would not go about deleting 
until you find what the large objects are for.



select * from pg_largeobject

returns empty table.

Database has approx 50 schemas and many thousands of tables.

Andrus.



permission denied for large object 200936761

2021-02-01 Thread Andrus

Hi!

Database does not contain large objects.

pg_dump starts to throw error

ERROR:  permission denied for large object 200936761

Tried

select * from "200936761"

but it returned "relation does not exist"

How to fix this? How to find which table causes this error?

How to find and delete all large objects in the database? Maybe one was 
created accidentally.
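A sketch for inventorying and removing large objects; lo_unlink() removes one object by oid (the oid shown is the one from this thread; confirm nothing references an object before unlinking it):

```sql
-- list every large object with its owning role
SELECT m.oid, r.rolname AS owner
  FROM pg_largeobject_metadata m
  LEFT JOIN pg_roles r ON r.oid = m.lomowner;

-- remove a single large object by oid
SELECT lo_unlink(200936761);
```

For bulk cleanup of orphans, the contrib utility vacuumlo removes large objects that are not referenced from any oid or lo column in the database.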


Using

 PostgreSQL 12.2 (Debian 12.2-2.pgdg100+1) on x86_64-pc-linux-gnu, 
compiled by gcc (Debian 8.3.0-6) 8.3.0, 64-bit


Andrus.



REASSIGN OWNED BY in current database only

2021-01-14 Thread Andrus

Hi!

Cluster contains lot databases.

All objects in a database should be owned by a separate role whose name is in the form

databasename_owner

where databasename is this database name in cluster.

This role is granted to the cluster users who should have access to this 
database.


The database is restored from another cluster where it has a different name.

After that

REASSIGN OWNED BY originaldbname_owner TO restoreddbname_owner

is used to force this role in restored database.

This command also changes the owner of global objects.

How to change the owner only in the current database, leaving the owner 
of global objects unchanged?
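One way to avoid touching shared objects (databases, tablespaces) is to change ownership object-by-object instead of using REASSIGN OWNED. A sketch covering tables only, using the role names from this thread; sequences, views and functions would need analogous loops over their own catalogs:

```sql
DO $$
DECLARE
    r record;
BEGIN
    -- reassign only the tables in this database owned by the old role
    FOR r IN
        SELECT schemaname, tablename
          FROM pg_tables
         WHERE tableowner = 'originaldbname_owner'
    LOOP
        EXECUTE format('ALTER TABLE %I.%I OWNER TO restoreddbname_owner',
                       r.schemaname, r.tablename);
    END LOOP;
END;
$$;
```

Because this never touches pg_database or pg_tablespace entries, the global objects keep their original owner.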


Andrus.


Error messages on duplicate schema names

2021-01-06 Thread Andrus

Hi!

ALTER DEFAULT PRIVILEGES IN SCHEMA public,public   GRANT all ON TABLES 
TO testoig;


Throws strange error

Tuple already updated by self

In another case which I posted, a duplicate schema causes another strange 
error:

duplicate key value violates unique constraint 
"pg_default_acl_role_nsp_obj_index"
DETAIL: Key (defaclrole, defaclnamespace, defaclobjtype)=(30152, 
186783649, r) already exists.


Should duplicate schema names be accepted, or should their usage throw 
better error messages?


Andrus.



duplicate key value violates unique constraint pg_default_acl_role_nsp_obj_index

2021-01-05 Thread Andrus
efon,language,vabakuup,kasilfirma) on 
kasutaja to "testuser";

grant insert on logifail to "testuser";

Using

PostgreSQL 12.2 (Debian 12.2-2.pgdg100+1) on x86_64-pc-linux-gnu, 
compiled by gcc (Debian 8.3.0-6) 8.3.0, 64-bit


Andrus.


Include 3 previous tokens in syntax error message

2020-12-27 Thread Andrus

Hi!

Postgres returns hard-to-read syntax error messages like

Syntax error at or near ')'

If there are many ) characters in the query, it is not possible to find 
the place where the error occurs.


STATEMENT_POSITION is difficult to use since drivers perform parameter 
replacements, which makes it differ from the source position.


How to fix this so that the 3 last tokens are returned in the message, like

Syntax error at or near ' i > )'

Andrus.


How to distribute products to shop by amount of sales

2020-08-16 Thread Andrus

Hi!

Table of product types  contains desired quantities in shop

    create temp table producttype (
 productype char(10) primary key,
 desired integer check ( desired> 0)
    ) on commit drop ;
    insert into producttype values ( 'SHOE',3);

product table contains products and product types

    create temp table product  (
    product char(20) primary key,
    producttype char(10) references producttype
    ) on commit drop;

    insert into product values 
('SHOE1','SHOE'),('SHOE2','SHOE'),('SHOE3','SHOE'),('SHOE4','SHOE'),('SHOE5','SHOE');


Warehouse table contains quantities in warehouse to distribute

    create temp table warehouse (
  product char(20) primary key references product,
  quantity integer check ( quantity> 0)
    ) on commit drop ;
    insert into warehouse values ('SHOE1',50),('SHOE2',60),('SHOE3',70);

Shop table contains quantities in shop

    create temp table shop (
  product char(20) primary key references product,
  quantity integer check ( quantity> 0)
    ) on commit drop;
    insert into shop values ('SHOE4',1);
    insert into shop values ('SHOE5',1);

The sales table contains sold quantities. The most sold items should be 
moved from stock first.


    create temp table sales (
  product char(20) primary key references product,
  quantity integer check ( quantity> 0)
    ) on commit drop;
    insert into sales values ('SHOE1',100),('SHOE2',200);

How to find the products which should be moved from the warehouse to the 
shop so that the shop stock is increased to producttype.desired for the 
products in the warehouse? The most sold products should be moved first. 
Only one unit (quantity 1) should be moved per product code.

Using the data above, there should be 3 shoes (producttype.desired) in 
the shop but there are only 2 (sum(shop.quantity) for shoes).

The most sold shoe in the warehouse is SHOE2, so SHOE2 should be moved 
from the warehouse to the shop.

How to find the products which should be moved? Can some SELECT with a 
window function be used for this?

PostgreSQL 9.3.5 is used.
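A sketch of one possible answer, 9.3-compatible (CTEs plus row_number()): the shortage is computed per product type, and the top-selling warehouse products fill it. Column and table names are taken from the definitions above:

```sql
WITH shortage AS (          -- how many units each type still needs in shop
    SELECT pt.productype,
           pt.desired - coalesce(sum(s.quantity), 0) AS missing
      FROM producttype pt
      LEFT JOIN product p ON p.producttype = pt.productype
      LEFT JOIN shop s    ON s.product = p.product
     GROUP BY pt.productype, pt.desired
),
ranked AS (                 -- warehouse products, best sellers first
    SELECT p.product, p.producttype,
           row_number() OVER (PARTITION BY p.producttype
                              ORDER BY coalesce(sa.quantity, 0) DESC) AS rn
      FROM warehouse w
      JOIN product p     ON p.product = w.product
      LEFT JOIN sales sa ON sa.product = w.product
)
SELECT r.product
  FROM ranked r
  JOIN shortage s ON s.productype = r.producttype
 WHERE r.rn <= s.missing;
```

With the sample data the shortage for SHOE is 3 - 2 = 1, and SHOE2 ranks first by sales among warehouse stock, so the query yields SHOE2 as expected.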

Andrus.

Posted also in 
https://stackoverflow.com/questions/63433824/how-to-distribute-products-to-shop-by-amount-of-sales




How to create function returning numeric from string containing percent character

2020-07-22 Thread Andrus

The val function should return the numeric value from a string up to the 
first non-digit character, honoring the first decimal point:

   val('1,2TEST')  should return 1.2
   val('1,2,3')  should return 1.2
   val('-1,2,3')  should return -1.2

I tried

   CREATE OR REPLACE FUNCTION public.VAL(value text)
 RETURNS numeric AS
   $BODY$
   SELECT coalesce(nullif('0'||substring(Translate($1,',','.'), 
'^-?[0-9]+\.?[0-9]*$'),''),'0')::numeric;
   $BODY$ language sql immutable;

but if the string contains a % character,

   select val('1,2%')

returns 0.

How to force it to return 1.2 ?

It should work starting from Postgres 9.0
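The regex in the posted function is anchored with $, so any trailing non-digit (like %) makes the whole match fail and the function falls back to 0. Dropping the end anchor lets substring() return just the leading numeric prefix. A sketch of the corrected function:

```sql
CREATE OR REPLACE FUNCTION public.val(value text)
RETURNS numeric AS
$BODY$
    -- take the leading -?digits[.digits] prefix; trailing text is ignored
    SELECT coalesce(
             nullif(substring(translate($1, ',', '.')
                              from '^-?[0-9]+\.?[0-9]*'), ''),
             '0')::numeric;
$BODY$ LANGUAGE sql IMMUTABLE;

-- select val('1,2%')    -> 1.2
-- select val('1,2TEST') -> 1.2
-- select val('TEST')    -> 0   (no match: substring returns NULL)
```

POSIX substring() returns NULL when the pattern does not match at all, which the existing coalesce/nullif wrapper already turns into 0, so only the pattern itself needed to change.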

Posted also in

https://stackoverflow.com/questions/63032072/how-to-create-function-returning-value-up-to-first-non-digit-decimal-charcater#63032126

Andrus.





How to get previous log file

2020-06-28 Thread Andrus

Hi!

select pg_read_file(pg_current_logfile()) 


retrieves today's log file.
Log files are in the log directory:

/var/lib/postgresql/12/main/log# ls
...
postgresql-2020-06-08_00.log  postgresql-2020-06-18_00.log  
postgresql-2020-06-28_00.log
postgresql-2020-06-09_00.log  postgresql-2020-06-19_00.log

How to get yesterday's log file from a remote client application using a 
PostgreSQL query?
Using Postgres 12 in Debian.
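A sketch, assuming the default log_filename pattern with midnight rotation (the listing above suggests names like postgresql-YYYY-MM-DD_HHMMSS.log); pg_ls_logdir() lists the files and pg_read_file() fetches one:

```sql
-- list available server log files (superuser, or a role granted access)
SELECT name, modification FROM pg_ls_logdir() ORDER BY name;

-- read yesterday's file, assuming the rotation timestamp is midnight
SELECT pg_read_file('log/postgresql-'
                    || to_char(current_date - 1, 'YYYY-MM-DD')
                    || '_000000.log');
```

If the actual rotation timestamp differs, take the exact file name from the pg_ls_logdir() output instead of constructing it.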

Andrus.





canceling statement due to conflict with recovery after pg_basebackup

2020-06-03 Thread Andrus

Hi!

An async binary replication hot standby was started after pg_basebackup.
Running a query on the slave throws the error


ERROR:  canceling statement due to conflict with recovery

Why? The query should return table and other object sizes in decreasing 
order. How to improve it so that this error does not occur?
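Two common mitigations for this conflict, sketched here with ALTER SYSTEM on the standby; each is a trade-off (a longer WAL-apply delay on the standby versus possible table bloat on the primary), and the 5-minute value is an illustrative choice:

```sql
-- allow long-running standby queries to delay WAL apply for up to 5 minutes
ALTER SYSTEM SET max_standby_streaming_delay = '5min';

-- or ask the primary not to vacuum away rows the standby still needs
ALTER SYSTEM SET hot_standby_feedback = on;

SELECT pg_reload_conf();  -- both settings take effect on reload
```

With either setting in place, "canceling statement due to conflict with recovery" should become rare for reporting queries like the one above.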


Log:

2020-06-03 09:40:52 EEST  LOG:  database system was interrupted; last known up 
at 2020-06-03 07:59:56 EEST
2020-06-03 09:41:10 EEST  LOG:  entering standby mode
2020-06-03 09:41:10 EEST  LOG:  redo starts at 2E2/28
2020-06-03 09:41:19 EEST  LOG:  consistent recovery state reached at 2E2/B5A56C8
2020-06-03 09:41:19 EEST  LOG:  database system is ready to accept read only 
connections
2020-06-03 09:41:19 EEST  LOG:  started streaming WAL from primary at 
2E2/C00 on timeline 1
2020-06-03 09:54:23 EEST 85.253.131.166 user@sba ERROR:  canceling statement 
due to conflict with recovery
2020-06-03 09:54:23 EEST 85.253.131.166 user@sba DETAIL:  User query might have 
needed to see row versions that must be removed.
2020-06-03 09:54:23 EEST 85.253.131.166 user@sba STATEMENT:  select
company_name(n.nspname)::char(20) as company,
 relname::char(25),
   pg_size_pretty(pg_total_relation_size(c.oid))::char(10) as totalsize,
n.nspname::char(12),
   case
  when c.relkind='i' then 'index'
   when c.relkind='t' then 'toast'
  when c.relkind='r' then 'table'
  when c.relkind='v' then 'view'
  when c.relkind='c' then 'composite type'
  when c.relkind='S' then 'sequence'
   else c.relkind::text
 end ::char(14) as "type"
from
   pg_class c
   left join pg_namespace n on n.oid = c.relnamespace
   left join pg_tablespace t on t.oid = c.reltablespace
where
   (pg_total_relation_size(c.oid)>>21)>0 and c.relkind!='t'
order by
   pg_total_relation_size(c.oid) desc

Andrus.




Re: How to start slave after pg_basebackup. Why min_wal_size and wal_keep_segments are duplicated

2020-06-01 Thread Andrus

Hi!


How to create replication server ?
I always do it this way and it work for me:
$ pg_basebackup -h ${PGHOST} -p ${PGPORT} -U replicator -W -R -D ${PGDATA} -P 
-v -Fp -Xs
After that, I edit ${PGDATA}/postgresql.conf and (w/ PostgreSQL 11 and older 
${PGDATA}/recovery.conf) to make it do what I want and
then I just launch it:
$ pg_ctl start


My script does the same thing as your commands.


From that moment onward, it replicates and applies to the replica. Checks in 
pg_stat_replication on the master and
pg_stat_wal_receiver on the replica confirm that. They also show the WAL 
switches.
To provoke a WAL switch I always do:
postgres=# checkpoint; select pg_switch_wal();
CHECKPOINT
pg_switch_wal
I just don't understand what you're trying to achieve here.


I want to create a replication server.


My guess is, you want to stop and backup the old database cluster,


The old cluster is empty (fresh from initdb), so a backup is not needed.


then create a new one in its old directory, right?


pg_basebackup creates a new main directory.


In this case, you probably need to change your script to something like this:
PGHOST=remote.example.com
PGPASSWORD=mypass
PGUSER=replikaator
PGDATA=/var/lib/postgresql/12/main
export PGHOST PGPASSWORD PGUSER PGDATA
/etc/init.d/postgresql stop
mv ${PGDATA} /var/lib/postgresql/12/mainennebaasbakuppi
pg_basebackup -h ${PGHOST} -p ${PGPORT} -U ${PGUSER} -W -R -D ${PGDATA} -P -v 
-Fp -Xs
/etc/init.d/postgresql start


pg_basebackup uses environment variables when options are not specified on the command line, so my script does the same thing.


Note that my invocation of pg_basebackup asks for the replicator password. This 
is intended. You'd probably want to change that.
Also, no need to play around with ownership and permissions. Do it as "postgres", not as 
"root".


I tried

sudo --user=postgres pg_basebackup 

but got error

could not change directory to "/root": Permission denied
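A common workaround (my suggestion, not from the thread): sudo keeps root's working directory (/root), which the postgres user cannot enter. Changing to a world-readable directory first avoids the error. The option values are taken from the script earlier in the thread:

```shell
# change to a directory the postgres user can read before dropping privileges
cd /tmp
sudo --user=postgres pg_basebackup -h "$PGHOST" -U replikaator -W -R -D "$PGDATA" -P -v -Fp -Xs
```

Running the command from the postgres account directly (su - postgres) avoids the problem entirely.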

Andrus.





Re: How to start slave after pg_basebackup. Why min_wal_size and wal_keep_segments are duplicated

2020-05-31 Thread Andrus

Hi!


I want to create  hot standby async server using
/etc/init.d/postgresql stop
mv /var/lib/postgresql/12/main /var/lib/postgresql/12/mainold
pg_basebackup --write-recovery-conf -D /var/lib/postgresql/12/main

I don't see where the base backup is being taken from just where it is going.


It is taken from a VPS server over a 20 Mbit public internet connection.
Both servers are running Debian Linux.


I dont receive WALs.
If you are doing binary replication then you are receiving WALs. It just a matter of whether you are streaming them or shipping 
them over complete.


Using wal_keep_segments=360 also causes the same "WAL file not found" error after pg_basebackup.
The master server has 6 GB of WAL files, and WAL generation during pg_basebackup is much slower than 360 segments.
Maybe pg_basebackup skips WAL segments, or maybe wal_compression=on causes the issue.

How to fix this ?
How to create a base backup so that the cluster is transferred over the internet faster? Maybe it can be transferred in compressed form.


Andrus.




Re: How to start slave after pg_basebackup. Why min_wal_size and wal_keep_segments are duplicated

2020-05-31 Thread Andrus

Hi!

Will wal_keep_segments keep segments also if a named replication slot is not used?
Well if you are using a replication slot there is no point in using wal_keep_segments. Slots were created, in part at least, so you did not have to guess at a wal_keep_segments number.


I don't use a slot.

To really answer this we will need to see the exact commands you are 
using and the sequence they are done in.


Replication server is created using

/etc/init.d/postgresql stop
mv /var/lib/postgresql/12/main /var/lib/postgresql/12/mainold
pg_basebackup --write-recovery-conf -D /var/lib/postgresql/12/main
chmod --recursive --verbose 0700 /var/lib/postgresql/12/main
chown -Rv postgres:postgres /var/lib/postgresql/12/main
/etc/init.d/postgresql start

Andrus.





Re: How to start slave after pg_basebackup. Why min_wal_size and wal_keep_segments are duplicated

2020-05-31 Thread Andrus

Hi!


In addition to my most recent questions:
What are you trying to achieve?


I want to create  hot standby async server using 


/etc/init.d/postgresql stop
mv /var/lib/postgresql/12/main /var/lib/postgresql/12/mainold
pg_basebackup --write-recovery-conf -D /var/lib/postgresql/12/main
chmod --recursive --verbose 0700 /var/lib/postgresql/12/main
chown -Rv postgres:postgres /var/lib/postgresql/12/main
/etc/init.d/postgresql start


In other words why do a pg_basebackup if you have a standby receiving WALs?


I don't receive WALs.

Andrus.




Re: How to start slave after pg_basebackup. Why min_wal_size and wal_keep_segments are duplicated

2020-05-31 Thread Andrus

Hi!


I'm guessing you are looking for:
https://www.postgresql.org/docs/12/runtime-config-replication.html
26.2.6. Replication Slots
Replication slots provide an automated way to ensure that the master 
does not remove WAL segments until they have been received by all 
standbys, and that the master does not remove rows which could cause a 
recovery conflict even when the standby is disconnected.


Using a replication slot can cause the pg_wal directory to occupy all free disk space, after which the server stops responding.
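For the record, PostgreSQL 13 later added a cap for exactly this risk; it is not available on the v12 servers discussed in this thread, but is worth knowing about:

```ini
# postgresql.conf, PostgreSQL 13 or newer (value is an assumption)
max_slot_wal_keep_size = 10GB  # slots may not retain more WAL than this;
                               # a slot that falls further behind is invalidated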


This is spelled out here:
https://www.postgresql.org/docs/12/warm-standby.html#STREAMING-REPLICATION
"If you use streaming replication without file-based continuous 
archiving, the server might recycle old WAL segments before the standby 
has received them. If this occurs, the standby will need to be 
reinitialized from a new base backup. You can avoid this by setting 
wal_keep_segments to a value large enough to ensure that WAL segments 
are not recycled too early, or by configuring a replication slot for the 
standby."


Will wal_keep_segments keep segments also if a named replication slot is not used?

Andrus.




How to start slave after pg_basebackup. Why min_wal_size and wal_keep_segments are duplicated

2020-05-31 Thread Andrus

Hi!

pg_basebackup takes 8 hours.
After it finishes, the replication slave does not start:

LOG:  consistent recovery state reached at 2DE/985A5BE0
LOG:  database system is ready to accept read only connections
LOG:  started streaming WAL from primary at 2DE/9900 on timeline 1
replikaator@[unknown] LOG:  received replication command: SHOW data_directory_mode
replikaator@[unknown] LOG:  received replication command: IDENTIFY_SYSTEM
replikaator@[unknown] LOG:  received replication command: START_REPLICATION 2CF/E900 TIMELIN
replikaator@[unknown] ERROR:  requested WAL segment 000102CF00E9 has already been re
replikaator@[unknown] LOG:  received replication command: SHOW data_directory_mode
replikaator@[unknown] LOG:  received replication command: IDENTIFY_SYSTEM
replikaator@[unknown] LOG:  received replication command: START_REPLICATION 2CF/E900 TIMELIN
replikaator@[unknown] ERROR:  requested WAL segment 000102CF00E9 has already been removed
...

I tried it again and the same error occurred.

How to force replication to start?

I increased wal parameters in master to

wal_compression=on
max_wal_size = 5GB
min_wal_size = 4GB # was 80MB
wal_keep_segments= 360 # was 180

Will this allow replication to start after pg_basebackup ?
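One fix worth trying (my suggestion, not something stated in the thread): create a physical replication slot before the backup, so the primary retains every segment the standby will need until it connects, however long the backup takes. The slot name 'standby1' is an assumption:

```shell
# on the primary: create the slot
psql -U postgres -c "SELECT pg_create_physical_replication_slot('standby1');"

# on the standby: point pg_basebackup at the slot; --write-recovery-conf
# then records primary_slot_name in the generated recovery settings
pg_basebackup --write-recovery-conf --slot=standby1 -D /var/lib/postgresql/12/main
```

With a slot in place, guessing at wal_keep_segments becomes unnecessary (though the pg_wal disk-usage caveat discussed elsewhere in this thread applies).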

According to the docs, min_wal_size and wal_keep_segments both keep a minimum number of WAL segments for replication.
Why are those parameters duplicated?

Andrus.




Re: Query returns no rows in pg_basebackup cluster

2020-05-25 Thread Andrus

Hi!


How to set logical replication for all user databases in cluster so that
when new database is added or new tables are added to database they will
start replicate automatically ?

I think that it would be good if you spend some time reading the
documentation on this stuff, particularly the part about restrictions,
to understand the use cases where that can become useful:
https://www.postgresql.org/docs/devel/logical-replication.html


Thank you.
I read it and haven't found any reference to PITR recovery.
For PITR recovery it should probably save SQL statements to files and allow specifying a recovery target time for applying them to a base backup.

Is PITR recovery supported only using binary WAL files ?

Other limits can probably be solved.
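(As far as I know, built-in PITR indeed works only by replaying archived WAL onto a base backup; logical replication does not provide it. A minimal v12 setup would look like this, with the archive path and target time being assumptions:)

```ini
# postgresql.conf on the restored base backup
restore_command = 'cp /mnt/wal_archive/%f %p'
recovery_target_time = '2020-05-20 12:00:00'
# plus an empty recovery.signal file in the data directory (v12+)
```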

Andrus.




Re: Query returns no rows in pg_basebackup cluster

2020-05-25 Thread Andrus

Hi!


No.  Physical copies need to be based on the same platform.

Does the O/S that the client software runs on really affect this?

To the extent that the O/S determines text sort order, yes; see thread.
The short answer here is that we aren't going to support such cases.
If you try to replicate across platforms, and it works, you're in luck.
If it doesn't work, you get to keep both pieces; we will not accept
that as a bug.


In 2017 Peter wrote that ICU-based collations would be offered alongside the libc-based collations (1).
Currently this still requires recompiling Postgres on all binary-replication platforms.
Maybe the ICU locale will be selected automatically during installation in Postgres 13. Using the same ICU locale on all replication platforms will hopefully fix the issue.


Currently an option is to use ucs_basic as the default collation when creating the cluster.

(1) https://www.2ndquadrant.com/en/blog/icu-support-postgresql-10/

Andrus. 






Re: Query returns no rows in pg_basebackup cluster

2020-05-25 Thread Andrus

Hi!


The backup is created on Windows from a Linux server using pg_receivewal and pg_basebackup.
Can this backup be used for PITR on Linux?

No.  Physical copies need to be based on the same platform.  If you
wish to replicate a cluster without any platform, architecture or even
not-too-many major version constraints, there is also logical
replication available since v10.


Will logical replication also allow two modes:
 1. PITR recovery can be used if needed
 2. Hot standby: user databases in both clusters contain the same data.

How to set up logical replication for all user databases in the cluster so that when a new database is added, or new tables are added to a database, they start replicating automatically?

Will it require a more powerful backup server to replay the main server's SQL stream from the different databases?
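As a partial answer sketch (mine, not from the thread): publications and subscriptions are per-database, so each database needs its own pair and new databases are not picked up automatically. New tables within a published database can be, though. The names and connection string below are assumptions:

```sql
-- on the publisher, once per database
CREATE PUBLICATION allpub FOR ALL TABLES;

-- on the subscriber
CREATE SUBSCRIPTION allsub
  CONNECTION 'host=primary.example.com dbname=mydb user=repl'
  PUBLICATION allpub;

-- after new tables appear, sync their pre-existing rows
ALTER SUBSCRIPTION allsub REFRESH PUBLICATION;
```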

Andrus. 






Re: Query returns no rows in pg_basebackup cluster

2020-05-22 Thread Andrus

Hi!


ERROR:  item order invariant violated for index "desktop_baas_liigid_idx"
DETAIL:  Lower index tid=(3,15) (points to index tid=(16,4098)) higher index tid=(3,16) (points to index tid=(17,4098)) page 
lsn=292/630C0CE8.

SQL state: XX002

Uh huh ... and I'll bet the same test on the source server is just fine?
I don't find it surprising in the least that different platforms have
different ideas on fine points like how to sort a leading underscore.
Those things just aren't that well standardized.


This column is not used for locale-specific data.

Running 

alter table desktop alter  baas type char(8) collate ucs_basic 

fixes the issue. 
Is this fix reasonable ?

What other issues may occur ?

Can a base backup created on Windows using pg_basebackup be used on Linux without such a fix?

Andrus.




Re: Query returns no rows in pg_basebackup cluster

2020-05-22 Thread Andrus

Hi!


The sorting rules for this locale must be the same in both platforms.
Only locale names are different.

I think they are less alike than you hoped, because if they were alike,
you wouldn't be seeing this problem.
Possibly you could try running contrib/amcheck on the index in question
and see if it reports any issues.


I tried and it reports error

ERROR:  item order invariant violated for index "desktop_baas_liigid_idx"
DETAIL:  Lower index tid=(3,15) (points to index tid=(16,4098)) higher index tid=(3,16) (points to index tid=(17,4098)) page 
lsn=292/630C0CE8.

SQL state: XX002
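For readers reproducing this, the amcheck invocation would presumably have looked something like the following (the index name is from the thread):

```sql
CREATE EXTENSION IF NOT EXISTS amcheck;
SELECT bt_index_check('desktop_baas_liigid_idx'::regclass);
```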

Andrus.





Re: Query returns no rows in pg_basebackup cluster

2020-05-22 Thread Andrus

Hi!


Database in Windows is in read-only (recovery) mode so it cannot changed.

Then you might as well just rm -rf it (or whatever the equivalent Windows
incantation is).  On Windows, that database is broken and useless.


The backup is created on Windows from a Linux server using pg_receivewal and pg_basebackup.
Can this backup be used for PITR on Linux?

Andrus.




Re: Query returns no rows in pg_basebackup cluster

2020-05-22 Thread Andrus

Hi!


Main server is in Linux and backup server is in windows.

This is not a supported setup if you want to run a physical backup.
Your backup and your primary need to be the same - software and hardware.  
Consider anything that is working to be a false
negative – assume >something will break or simply give incorrect results.


This base backup should be used for recovery. Taking a new base backup on Linux does not allow recovering to an earlier date.
Both servers have Intel 64 bit CPUs.
I understand that the only issue is the index structure and that REINDEX will fix this.
What other issues may occur ?

Will pg_dump/pg_restore on the Windows server fix all issues?

Andrus. 






Re: Query returns no rows in pg_basebackup cluster

2020-05-21 Thread Andrus

Hi!


No, what it sounds like is the OP tried to physically replicate a
database on another platform with completely different sorting rules.


The sorting rules for this locale must be the same on both platforms; only the locale names differ. It looks like the Windows server does not recognize the Linux locale name.


Which means all his text indexes are corrupt according to the
destination platform's sorting rules, which easily explains the
observed misbehavior (ie, index searches not finding the expected rows).


Lots of queries seem to work properly.

REINDEX would fix it. 


REINDEX throws error

ERROR:  cannot execute REINDEX during recovery
SQL state: 25006


But the major point here is you can't just ignore
a collation mismatch, which in turn implies that you can't do physical
replication from Linux to Windows, or vice versa (and most other
cross-platform cases are just as dangerous).


The database is used in recovery mode to find the proper recovery point and to get data from it at that point.
The locales are actually the same; on Windows, Postgres does not recognize the Linux locale name.


Database in Windows is in read-only (recovery) mode so it cannot changed.

Then you might as well just rm -rf it (or whatever the equivalent Windows
incantation is).  On Windows, that database is broken and useless.


Most queries seem to work.
The database should be examined to recover accidentally deleted data from it.


Is making it read-write and reindexing the only solution, or can it be fixed in a read-only database as well, e.g. by forcing the same locale in postgresql.conf?

Andrus.




Re: Query returns no rows in pg_basebackup cluster

2020-05-21 Thread Andrus

Hi!


The LIKE query probably doesn't use an index and thus finds the relevant
data via sequential scan and equality checks on each record.



Yeah, exactly.  An equality condition will use a btree index if
available.  LIKE, however, sees the "_" as a wildcard so it cannot
use an index and resorts to a seqscan --- which will work fine.
It's just index searches (and index-based sorts) that are broken.
Of course, if there isn't an index on the column in question
then this theory falls to the ground.


There is a composite index on the baas column:

CREATE TABLE public.desktop
(
   id integer NOT NULL DEFAULT nextval('desktop_id_seq'::regclass),
   recordtype character(5) COLLATE pg_catalog."default" NOT NULL,
   klass character(1) COLLATE pg_catalog."default",
   baas character(8) COLLATE pg_catalog."default" NOT NULL,
   liigid character(1) COLLATE pg_catalog."default" NOT NULL DEFAULT ''::bpchar,
   jrk numeric(4,0) NOT NULL DEFAULT 0,
...
   CONSTRAINT desktop_pkey PRIMARY KEY (id),
   CONSTRAINT desktop_baas_not_empty CHECK (baas <> ''::bpchar),
   CONSTRAINT desktop_id_check CHECK (id > 0),
   CONSTRAINT desktop_recordtype_check CHECK (recordtype = 'Aken'::bpchar OR 
recordtype = 'Veerg'::bpchar)
)

TABLESPACE pg_default;
CREATE INDEX desktop_baas_liigid_idx
   ON public.desktop USING btree
   (baas COLLATE pg_catalog."default" ASC NULLS LAST, liigid COLLATE 
pg_catalog."default" ASC NULLS LAST)
   TABLESPACE pg_default;

Maybe it is possible to force Postgres on Windows to use the same locale as on Linux. The locales are actually the same.

Andrus.





Re: Query returns no rows in pg_basebackup cluster

2020-05-21 Thread Andrus

Hi!


Are you referring to two different instances of Postgres on Windows?


No. 
Main server is in Linux and backup server is in windows.


Andrus.



