Re:Re: Different sort result between PostgreSQL 8.4 and 12.5

2022-06-27 Thread gzh



Dear Magnus:



The information you sent me was exactly what I needed. 
After checking the operating systems, I found that the locales are indeed different.


--PostgreSQL 8.4
LANG=ja_JP.UTF-8


--PostgreSQL 12.5
LANG=en_US.UTF-8


After I added the following clause after "order by ascid" in the PostgreSQL 
12.5 database, I got the same result as PostgreSQL 8.4:


COLLATE "ja-JP-x-icu"
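For completeness, the amended query would read as follows (a sketch using the table and column names from the thread; the stray "oo_m" alias from the original transcript is dropped here):

```sql
-- Force the Japanese ICU collation so that 12.5 orders rows the same way 8.4 did:
SELECT ascid
FROM test_order
WHERE ascid IN ('"! ascid"', '"001"')
ORDER BY ascid COLLATE "ja-JP-x-icu";
```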


Thank you for being so helpful.







At 2022-06-27 19:33:01, "Magnus Hagander" wrote:

On Mon, Jun 27, 2022 at 1:31 PM gzh wrote:


Hi, 




I have had a Perl Website working for 7 years and have had no problems

until a few weeks ago I replaced my database server with a newer one.




Database server (old): PostgreSQL 8.4 32bit

Database server (new): PostgreSQL 12.5 64bit




I ran the following SQL in PostgreSQL 8.4 and PostgreSQL 12.5, and they return 
different sort results.




--PostgreSQL 8.4

---

pg_db=# select ascid from test_order where oo_m.ascid in ('"! ascid"','"001"') 
order by ascid;

   ascid

---

 "! ascid"

 "001"

(2 rows)




--PostgreSQL 12.5

---

pg_db=# select ascid from test_order where oo_m.ascid in ('"! ascid"','"001"') 
order by ascid;

   ascid

---

 "001"

 "! ascid"

(2 rows)




What is the reason for this and is there any easy way to maintain compatibility?





Are these two really running on the same operating system?


This looks a lot like the locale changes included in newer versions of glibc, 
and is in that case dependent on an upgrade of the operating system, not an 
upgrade of PostgreSQL. See https://wiki.postgresql.org/wiki/Locale_data_changes 
for details.
 
--

 Magnus Hagander
 Me: https://www.hagander.net/
 Work: https://www.redpill-linpro.com/

Re: Libpq question related to allocated resources

2022-06-27 Thread Tom Lane
Karl Denninger  writes:
> But -- I still have a lot of memory out on the heap according to 
> jemalloc stats that is not being deallocated, and what's worse is that 
> if I rig the code to call PQfinish and then PQconnect once again I get 
> even more imbalanced allocate/free counts (and the memory use in said 
> buckets to go with them.)

Hmmm ... I'm not aware of any memory leaks in libpq, but that doesn't
mean there are none.  Of course, if you're forgetting to PQclear()
some PGresults, that's not libpq's fault ;-).

> The obvious question, given the warnings in the FastCGI library: Does 
> libpq modify the process environment?

No.  At least, I see no setenv() calls in it, and I think that it'd
be pretty unfriendly for a library to do that to its host application.

> A quick grep implies that indeed it may in 
> backend/libpq/auth.c at least,

backend/libpq is unrelated to interfaces/libpq.  (I've seen hints
that they arose from a common code base, but if so, that was a
few decades and a lot of rewrites ago.)

regards, tom lane




Re: Unique index prohibits partial aggregates

2022-06-27 Thread David Rowley
On Mon, 27 Jun 2022 at 23:49, Bos, Fred  wrote:
> "Settings: effective_cache_size = '128GB', force_parallel_mode = 'on', 
> max_parallel_workers = '40',

You shouldn't be using force_parallel_mode. It does not do what you
think. See the documentation for that GUC, or read [1]

> I expected the query to become faster
> with a unique index or column, so why does the query planner decide on group
> aggregation instead of partial aggregation?

It simply does not know how many groups are likely to exist for
your expression.  Statistics are only gathered on bare columns.  The
planner has no idea how many groups are likely to exist for
"t/(1000*3600*24)".

In PostgreSQL 14 and above you could create extended statistics for
the expression using:

create statistics t1_t_stats (ndistinct) on (t/(1000*3600*24)) from
bhload_nohyp_noin; -- pg14
analyze bhload_nohyp_noin;

for your version (pg13), you'd need to create an expression index.

create index on bhload_nohyp_noin using brin ((t/(1000*3600*24)));
analyze bhload_nohyp_noin;

I added "using brin" as a brin index is going to be a bit more
lightweight than a btree index.  You only need the index to instruct
ANALYZE to gather statistics. You might also want to ramp up the pages
per range.

With that, the planner might then realise that parallel aggregate
might be worthwhile.
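Assuming the statistics or expression index above are in place, one way to confirm the planner changed its mind is to look at the plan (a sketch; the table and expression come from the thread, but the GROUP BY shape is an assumption about Fred's query):

```sql
ANALYZE bhload_nohyp_noin;

EXPLAIN (COSTS OFF)
SELECT t/(1000*3600*24) AS day, count(*)
FROM bhload_nohyp_noin
GROUP BY t/(1000*3600*24);
-- Look for "Partial Aggregate" under a "Gather" node once the ndistinct
-- estimate for the expression becomes realistic.
```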

David

[1] 
https://www.enterprisedb.com/postgres-tutorials/using-forceparallelmode-correctly-postgresql




Re: Outer joins and NULLs (old subject "ERROR: failed to find conversion function from key_vals_nn to record[]")

2022-06-27 Thread Bryn Llewellyn
> david.g.johns...@gmail.com wrote:
> 
> Allowing domains to be defined as not null at this point is simply something 
> that we don't support but don't error out upon in the interest of backward 
> compatibility. (IMO, the documentation is not this strongly worded.) It, as 
> you note, has some corner-case bugs. You can avoid those bugs by simply not 
> using a non-null constraint as suggested.
> 
> ...At most we should probably go from saying "Best practice therefore..." to 
> "We no longer support setting a not null constraint on a domain but will not 
> error in the interest of not breaking existing uses that are careful to avoid 
> the problematic corner-cases”.

Thank you all for helping me see the problem here. I’d certainly welcome 
strengthening the doc’s admonition to use wording like "unsupported", 
"unpredictable results", and "just don’t do this".

I was simply confused. There's no other way to say it. Anyway, my foolish use 
of domains with "not null" constraints hadn't left the privacy of my own 
laptop—and I've expunged all those uses now.
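The pattern I have switched to, following the advice above, keeps the domain itself nullable and attaches NOT NULL where values are actually stored (names here are illustrative, not from the thread):

```sql
-- Nullable domain: no constraint on the domain itself, so none of the
-- corner cases discussed in this thread apply.
CREATE DOMAIN text_d AS text;

-- Enforce non-nullness per column instead; this is checked on every store.
CREATE TABLE pets (name text_d NOT NULL);

INSERT INTO pets VALUES (NULL);  -- fails with a not-null violation, as intended
```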

Here’s what I now take from that "create domain" note:

When a datum (either an actual value or a NULL) is copied from a "container" 
whose data type is the domain "d" to another container with that same data 
type, the constraints that might have been defined for "d" are not re-checked.

I'm using "container" here as an ad hoc, and somewhat loose, umbrella term of 
art for any of these:

—row-column intersection in a table (or view)
—an attribute of a composite type instance
—a local variable in a PL/pgSQL subprogram or anonymous block 
—a PL/pgSQL subprogram's formal parameter
—a PL/pgSQL subprogram's return datum
—and so on

I quite like this compact illustration of the paradox. (I expect that everybody 
has their own favorite.)

create domain text_nn as text not null;

create view null_having_null_not_constraint(t_nn, dt) as
with
  c1(t_nn) as (
values('dog'::text_nn)),

  c2(t_nn) as (
select (select t_nn from c1 where null))

select
  t_nn, pg_typeof(t_nn)
from c2;

\pset null ''
select t_nn, dt from null_having_null_not_constraint;

This is the result:

 t_nn |   dt
--+-
  | text_nn

And I quite like this demo of the fact that copying a datum between containers 
with the same constrained domain data type doesn't re-check the constraints:

do $body$
declare
  t_var_nn text_nn := '';
begin
  t_var_nn := (select t_nn from null_having_null_not_constraint);
  assert (t_var_nn is null), 'Expected "t_var_nn" (paradoxically) to be NULL 
here';
end;
$body$;

I'll use « the "paradoxical" pattern », below, to denote the fact that you can 
find a NULL in a container whose datatype has a "not null" constraint.

Here's the summary of my tests:

(1) select t_nn, dt from null_having_null_not_constraint;      -- follows the "paradoxical" pattern
(2) select t_nn... into a text_nn local variable               -- follows the "paradoxical" pattern
(3) select t_nn... into a text_nn field of a composite type    -- follows the "paradoxical" pattern
(4) assign (select t_nn...) to a text_nn IN formal parameter   -- follows the "paradoxical" pattern
(5) return t_nn from a function that returns "text_nn"         -- follows the "paradoxical" pattern

Libpq question related to allocated resources

2022-06-27 Thread Karl Denninger
I've got a fairly sizeable application that runs as a CGI app under 
Apache which I am attempting to convert to FastCGI.


"Once through and done" apps tend not to care much if deallocation is 
less than perfect, since exit(0) (or otherwise) tends to heal all 
wounds.  Not so much when something's running for hours, days or weeks;  
there, memory leaks are ruinous.


I've wrapped all of my internal functionality plus all calls to 
PQexecParams and the escape/unescape functions, all of which must be 
deallocated after being used.  All counters after a pass through the 
code are zero (increment on allocate, decrement on PQfremem or PQclear.)


But -- I still have a lot of memory out on the heap according to 
jemalloc stats that is not being deallocated, and what's worse is that 
if I rig the code to call PQfinish and then PQconnect once again I get 
even more imbalanced allocate/free counts (and the memory use in said 
buckets to go with them.)


The application itself is nothing particularly fancy although it 
typically makes dozens of Postgres calls; single-threaded, no prepared 
statements or async requests.


This is under 14.1; I haven't rolled the code forward, but I see nothing 
in the notes implying there is a problem in libpq that has been 
corrected, or that there was one in the past in this regard. Its also 
possible that the FastCGI wrapper has a problem internally.  The app, 
when run under valgrind to do cron processing, comes back clean -- it 
does show allocations on exit however, although "still accessible", and 
those which do come up are related to OpenSSL's error string 
initialization in /lib/libcrypto.so (I'm on FreeBSD and openssl was 
specified as "--with-openssl" when Postgres was built.)


The obvious question, given the warnings in the FastCGI library: Does 
libpq modify the process environment?  Reading from it (provided you 
don't modify anything from the pointers you access; if you want to then 
you must copy them somewhere and make the modification outside of the 
environment itself) is perfectly fine but writing it, directly or 
indirectly, is NOT.  A quick grep implies that indeed it may in 
backend/libpq/auth.c at least, but I do not have ENABLE_GSS defined in 
my configuration so that code shouldn't be there.


--
Karl Denninger
k...@denninger.net
The Market Ticker
[S/MIME encrypted email preferred]



Re: Table space not returned to the OS ?

2022-06-27 Thread Magnus Hagander
On Mon, Jun 27, 2022 at 12:01 PM Laurenz Albe 
wrote:

> On Mon, 2022-06-27 at 11:38 +0200, Magnus Hagander wrote:
> > On Mon, Jun 27, 2022 at 11:30 AM Florents Tselai <
> florents.tse...@gmail.com> wrote:
> > > A few months back (October) I had upgraded a Postgres instance from
> v12 —> 14.
> > >
> > > The database disk size under /var/lib/postgresql/12 was around 800GB+
> back then.
> > > Note, that IIRC I had used hard-linking during the upgrade.
> > >
> > > As I was running out of disk space, I started investigating and found
> out that
> > >
> > > /var/lib/postgresql/12/main/base/16385  —>  886GB+
> > > /var/lib/postgresql/14 —> 400GB
> >
> > It looks like you didn't actually delete the old cluster, which you are
> supposed
> > to do once you have verified that the new one works.
>
> I think that it should be done earlier than that, namely immediately after
> running
> pg_upgrade.  Once you have started the PostgreSQL 14 server (to verify
> that it works),
> you can no longer use the old cluster.
> Yes, the control file is crippled, but in my opinion, the earlier you
> delete the old
> cluster, the safer.
>

I'd say there is still some recoverable data in the old cluster files, even
if you can't just start up the cluster in it. But yes, it comes down to how
you define "verified that the new one works" to some level.

-- 
 Magnus Hagander
 Me: https://www.hagander.net/ 
 Work: https://www.redpill-linpro.com/ 


Re: Different sort result between PostgreSQL 8.4 and 12.5

2022-06-27 Thread Magnus Hagander
On Mon, Jun 27, 2022 at 1:31 PM gzh  wrote:

> Hi,
>
>
> I have had a Perl Website working for 7 years and have had no problems
> until a few weeks ago I replaced my database server with a newer one.
>
>
> Database server (old): PostgreSQL 8.4 32bit
>
> Database server (new): PostgreSQL 12.5 64bit
>
>
> I run following sql in PostgreSQL 8.4 and PostgreSQL 12.5, it returns
> different sort results.
>
>
> --PostgreSQL 8.4
>
> ---
>
> pg_db=# select ascid from test_order where oo_m.ascid in ('"!
> ascid"','"001"') order by ascid;
>
>ascid
>
> ---
>
>  "! ascid"
>
>  "001"
>
> (2 rows)
>
>
> --PostgreSQL 12.5
>
> ---
>
> pg_db=# select ascid from test_order where oo_m.ascid in ('"!
> ascid"','"001"') order by ascid;
>
>ascid
>
> ---
>
>  "001"
>
>  "! ascid"
>
> (2 rows)
>
>
> What is the reason for this and is there any easy way to maintain
> compatibility?
>
>
Are these two really running on the same operating system?

This looks a lot like the locale changes included in newer versions of
glibc, and is in that case dependent on an upgrade of the operating system,
not an upgrade of PostgreSQL. See
https://wiki.postgresql.org/wiki/Locale_data_changes for details.
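To make the comparison concrete, it helps to inspect the locale each cluster actually sorts with, and to pin an explicit collation where stable ordering matters (these queries are a generic suggestion, not from the thread):

```sql
-- What locale does each database sort with?
SELECT datname, datcollate, datctype FROM pg_database;

-- An explicit collation makes ORDER BY independent of the server's
-- default locale; "C" gives byte-order sorting that never changes
-- across glibc upgrades:
SELECT ascid FROM test_order ORDER BY ascid COLLATE "C";
```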

-- 
 Magnus Hagander
 Me: https://www.hagander.net/ 
 Work: https://www.redpill-linpro.com/ 


Different sort result between PostgreSQL 8.4 and 12.5

2022-06-27 Thread gzh
Hi, 




I have had a Perl Website working for 7 years and have had no problems

until a few weeks ago I replaced my database server with a newer one.




Database server (old): PostgreSQL 8.4 32bit

Database server (new): PostgreSQL 12.5 64bit




I ran the following SQL in PostgreSQL 8.4 and PostgreSQL 12.5, and they return 
different sort results.




--PostgreSQL 8.4

---

pg_db=# select ascid from test_order where oo_m.ascid in ('"! ascid"','"001"') 
order by ascid;

   ascid

---

 "! ascid"

 "001"

(2 rows)




--PostgreSQL 12.5

---

pg_db=# select ascid from test_order where oo_m.ascid in ('"! ascid"','"001"') 
order by ascid;

   ascid

---

 "001"

 "! ascid"

(2 rows)




What is the reason for this and is there any easy way to maintain compatibility?



Regards,


--




gzh

Re: Differences in Escaped bytea's when creating a plain pg_dump

2022-06-27 Thread Wolfgang Rißler

Am 27.06.2022 um 12:12 schrieb Daniel Verite:

WR wrote:


First run worked.
Second run worked.
Then I changed to SET standard_conforming_strings = off;
Third run worked.
Fourth run throw the error
Then I changed back to  SET standard_conforming_strings = on;
Fifth run throw the error too.
And only adding E and second backslash helped.


This kind of unstable behavior can be seen if the SET may not be
executed by the same session (same connection to the server) as the
subsequent queries.
SET affects only the session it's being run in.

For instance a connection pooler configured in statement mode may
produce that behavior. The solution in the case of a connection pooler
is to group related statements into a transaction.

Maybe pgAdmin has a pooler like that, but if you're issuing the statements
in the same SQL window, I would find it quite surprising that it doesn't
run them by the same session.
Or maybe you're mixing queries from different SQL windows that each
have their own connection, and in some cases you do the SET
in a window and the INSERT in another window.
Or maybe it's a pgAdmin bug.




Ok, thank you Daniel,

in all tests I put the SET statement before the INSERT in the same query 
tool and ran it as a whole (no statement was selected with the mouse for 
single execution). So I agree with you that one connection is used by 
pgAdmin to run SET and INSERT.
I always had a SET before each INSERT, so there could be no unknown 
state of standard_conforming_strings, no matter whether pgAdmin uses a 
different connection for each run of the script or the same one again.


Adding the second backslash in the C++ code cost me some nerves. Sending 
a statement std::string without double backslashes to pqxx::transaction 
results in the "0x00 is not a valid UTF8 sequence" error 21020 (as I 
wrote already). I'm afraid I can't simply replace all backslashes in the 
whole dump text with two of them.
Since I have now found out that psql works nicely with my dump files, 
maybe it's easier to call psql from my code instead of using a 
pqxx::transaction. That way I can also save myself loading the dump from 
the sql file. The more I think about it, the more I love this idea.


Thanks a lot, Wolfgang


--

Wolfgang Rißler
mailto: wolfgang.riss...@freenet.de
mobil: +49 1520 9191759
- may the source be with you -




Re: Question about attention to pgsql-hack...@lists.postgresql.org

2022-06-27 Thread Imre Samu
From the original post:
"-This whole system will uphold any precision, certainly ones within a very
large range limit, controlled by the already available type for large
positive integers, the BIGINT. It can thereby enumerate digits within the
range of
(+/-)1 to (+/-)9,223,372,036,854,775,807. This is at least between one and
positive nine quintilion digit places. More than enough for the speed and
scope of today, or maybe tomorrow, and the Desktop PC, as either a client
or a server."

As far as I know, the PostgreSQL 14 upper limit on a single field is 1 GB
(~1,000,000,000 bytes; with numeric, one byte stores roughly two digits),
so this is a hard limitation on the PostgreSQL side.
https://www.postgresql.org/docs/current/limits.html

A proposal for a first iteration:
- just use the Postgres "numeric" type ("up to 131072 digits before the
decimal point; up to 16383 digits after the decimal point"),
  which would make this so much easier.
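To illustrate why numeric may already cover a first iteration (a generic sketch, not from the original proposal):

```sql
-- numeric does exact arithmetic on values far beyond bigint's range:
SELECT factorial(100);       -- a 158-digit exact integer
SELECT 2::numeric ^ 200;     -- exact power, no floating-point rounding
```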

> I was wondering if anyone can or will pay some active attention to it?

Why is this important to you?
How do you expect to contribute to the proposed project?
What technical background do you have?

Regards,
 Imre


A Z  ezt írta (időpont: 2022. jún. 26., V, 12:37):

> Dear pgsql-general,
>
> I have successfully sent an (updated) email list message to
>
> *pgsql-hack...@lists.postgresql.org *
>
> which can be viewed in the archive here:
>
>
> https://www.postgresql.org/message-id/Rm7WBtZZpYL1NWuunuN_16EY0JcyejqV22z_JlUfvD5FYPenGDd_ZGUddwAcMNi2KNSyvqAhBRzj2JxtJoNmWAzykBQU4Z1AzBp0GPs8wZQ=@protonmail.com
>
> I was wondering if anyone can or will pay some active attention to it?
> i.e. what can I do from here to raise the message there to the attention
> of more people involved with pgsql-hackers, in order to get one or more
> active email replies in response to what I have posed/asked?
>
> *My email address is powerus...@live.com.au *,
>
> and my online name for the post is
>
> *Sergio Minervini*.
>
>
>


Re: help for pg_wal issue

2022-06-27 Thread Laurenz Albe
On Mon, 2022-06-27 at 09:50 +, Özge Özyavuz wrote:
> I want to backup my postgresql database but it fails because it is looking 
> for some wal files on pg_wal directory.
>  
> Before backup I checked my pg_wal directory and I saw the wal file,
>  
> bash-4.2$ cd /pgdata/pgsql/data/pg_wal
> bash-4.2$  ls
> -rw--- 1 postgres postgres 16777216 Jun 24 01:01 0001000A00AE
> -rw--- 1 postgres postgres 16777216 Jun 24 01:01 0001000A00AF
> -rw--- 1 postgres postgres 16777216 Jun 24 01:01 0001000A00B0
> -rw--- 1 postgres postgres 16777216 Jun 24 01:01 0001000A00B1
> -rw--- 1 postgres postgres 16777216 Jun 24 01:01 0001000A00B2
> -rw--- 1 postgres postgres 16777216 Jun 24 01:01 0001000A00B3
> -rw--- 1 postgres postgres 16777216 Jun 24 01:01 0001000A00B4
> -rw--- 1 postgres postgres 16777216 Jun 24 01:16 0001000A00B5
> -rw--- 1 postgres postgres 16777216 Jun 24 01:31 0001000A00B6
> -rw--- 1 postgres postgres 16777216 Jun 24 01:46 0001000A00B7
>  
> But backup fails with, 
>  
> Failed to open:  /pgdata//pgsql/data/pg_wal/0001000A00AE
> Error! The backup has failed.
>  
> After I checked pg_wal directory and really there is no such file, it was not 
> appear on disk. What can I do for this? Please help..
>  
> bash-4.2$ cd /pgdata/pgsql/data/pg_wal
> bash-4.2$  ls
> -rw--- 1 postgres postgres 16777216 Jun 24 01:01 0001000A00AF
> -rw--- 1 postgres postgres 16777216 Jun 24 01:01 0001000A00B0
> -rw--- 1 postgres postgres 16777216 Jun 24 01:01 0001000A00B1
> -rw--- 1 postgres postgres 16777216 Jun 24 01:01 0001000A00B2
> -rw--- 1 postgres postgres 16777216 Jun 24 01:01 0001000A00B3
> -rw--- 1 postgres postgres 16777216 Jun 24 01:01 0001000A00B4
> -rw--- 1 postgres postgres 16777216 Jun 24 01:16 0001000A00B5
> -rw--- 1 postgres postgres 16777216 Jun 24 01:31 0001000A00B6
> -rw--- 1 postgres postgres 16777216 Jun 24 01:46 0001000A00B7
> -rw--- 1 postgres postgres 16777216 Jun 24 02:01 0001000A00B8

You forgot to tell us how exactly you are performing that backup.

Yours,
Laurenz Albe




Re: Differences in Escaped bytea's when creating a plain pg_dump

2022-06-27 Thread Daniel Verite
WR wrote:

> First run worked.
> Second run worked.
> Then I changed to SET standard_conforming_strings = off;
> Third run worked.
> Fourth run throw the error
> Then I changed back to  SET standard_conforming_strings = on;
> Fifth run throw the error too.
> And only adding E and second backslash helped.

This kind of unstable behavior can be seen if the SET may not be
executed by the same session (same connection to the server) as the
subsequent queries.
SET affects only the session it's being run in.

For instance a connection pooler configured in statement mode may
produce that behavior. The solution in the case of a connection pooler
is to group related statements into a transaction.
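The grouping described above can be sketched like this; SET LOCAL scopes the setting to the transaction, so a statement-mode pooler cannot split it from the statements that depend on it (a generic sketch, not from the thread):

```sql
BEGIN;
SET LOCAL standard_conforming_strings = off;  -- visible only inside this transaction
-- ... the INSERTs that rely on the setting go here ...
COMMIT;
```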

Maybe pgAdmin has a pooler like that, but if you're issuing the statements
in the same SQL window, I would find it quite surprising that it doesn't
run them by the same session.
Or maybe you're mixing queries from different SQL windows that each
have their own connection, and in some cases you do the SET
in a window and the INSERT in another window.
Or maybe it's a pgAdmin bug.


Best regards,
-- 
Daniel Vérité
https://postgresql.verite.pro/
Twitter: @DanielVerite




help for pg_wal issue

2022-06-27 Thread Özge Özyavuz
Hi,

I want to back up my PostgreSQL database, but it fails because it is looking for 
some WAL files in the pg_wal directory.

Before backup I checked my pg_wal directory and I saw the wal file,

bash-4.2$ cd /pgdata/pgsql/data/pg_wal
bash-4.2$  ls
-rw--- 1 postgres postgres 16777216 Jun 24 01:01 0001000A00AE
-rw--- 1 postgres postgres 16777216 Jun 24 01:01 0001000A00AF
-rw--- 1 postgres postgres 16777216 Jun 24 01:01 0001000A00B0
-rw--- 1 postgres postgres 16777216 Jun 24 01:01 0001000A00B1
-rw--- 1 postgres postgres 16777216 Jun 24 01:01 0001000A00B2
-rw--- 1 postgres postgres 16777216 Jun 24 01:01 0001000A00B3
-rw--- 1 postgres postgres 16777216 Jun 24 01:01 0001000A00B4
-rw--- 1 postgres postgres 16777216 Jun 24 01:16 0001000A00B5
-rw--- 1 postgres postgres 16777216 Jun 24 01:31 0001000A00B6
-rw--- 1 postgres postgres 16777216 Jun 24 01:46 0001000A00B7

But backup fails with,

Failed to open:  /pgdata//pgsql/data/pg_wal/0001000A00AE
Error! The backup has failed.

Afterwards I checked the pg_wal directory, and the file really does not exist; it 
no longer appears on disk. What can I do about this? Please help.

bash-4.2$ cd /pgdata/pgsql/data/pg_wal
bash-4.2$  ls
-rw--- 1 postgres postgres 16777216 Jun 24 01:01 0001000A00AF
-rw--- 1 postgres postgres 16777216 Jun 24 01:01 0001000A00B0
-rw--- 1 postgres postgres 16777216 Jun 24 01:01 0001000A00B1
-rw--- 1 postgres postgres 16777216 Jun 24 01:01 0001000A00B2
-rw--- 1 postgres postgres 16777216 Jun 24 01:01 0001000A00B3
-rw--- 1 postgres postgres 16777216 Jun 24 01:01 0001000A00B4
-rw--- 1 postgres postgres 16777216 Jun 24 01:16 0001000A00B5
-rw--- 1 postgres postgres 16777216 Jun 24 01:31 0001000A00B6
-rw--- 1 postgres postgres 16777216 Jun 24 01:46 0001000A00B7
-rw--- 1 postgres postgres 16777216 Jun 24 02:01 0001000A00B8

Regards,

Some of my parameters on postgresql.conf file;

wal_buffers = 16MB
max_wal_size = 5GB
min_wal_size = 2GB
shared_buffers = 4GB
max_wal_senders = 10
wal_level = replica
wal_keep_segments = 50

archive_mode = on   # enables archiving; off, on, or always
archive_command = 'cp %p /pgdata/arc'   # command to use to archive a 
logfile segment
archive_timeout =0  # force a logfile segment switch after this
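Given that archive_mode is on in the configuration above, generic checks like these can show whether the missing segment was archived and recycled before the backup tool tried to read it (the queries are my suggestion, not from the thread):

```sql
-- Has the archiver kept up, and which segment did it last handle?
SELECT archived_count, last_archived_wal, failed_count, last_failed_wal
FROM pg_stat_archiver;

-- Which segment is the server writing right now?
SELECT pg_walfile_name(pg_current_wal_lsn());
```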





Özge Özyavuz
Profesyonel Hizmetler
Mimar
ozge.ozya...@softtech.com.tr
A: Tuzla Teknoloji ve Operasyon Merkezi İçmeler Mah. Piri Reis Cad. No:62 B 
Blok Kat:Zemin-1 34947
Tuzla / İstanbul Tel : +902165526000 Faks : +902165526597
www.softtech.com.tr







Re: Table space not returned to the OS ?

2022-06-27 Thread Laurenz Albe
On Mon, 2022-06-27 at 11:38 +0200, Magnus Hagander wrote:
> On Mon, Jun 27, 2022 at 11:30 AM Florents Tselai  
> wrote:
> > A few months back (October) I had upgraded a Postgres instance from v12 —> 
> > 14.
> > 
> > The database disk size under /var/lib/postgresql/12 was around 800GB+ back 
> > then.
> > Note, that IIRC I had used hard-linking during the upgrade.
> > 
> > As I was running out of disk space, I started investigating and found out 
> > that 
> > 
> > /var/lib/postgresql/12/main/base/16385  —>  886GB+ 
> > /var/lib/postgresql/14 —> 400GB
> 
> It looks like you didn't actually delete the old cluster, which you are 
> supposed
> to do once you have verified that the new one works.

I think that it should be done earlier than that, namely immediately after 
running
pg_upgrade.  Once you have started the PostgreSQL 14 server (to verify that it 
works),
you can no longer use the old cluster.
Yes, the control file is crippled, but in my opinion, the earlier you delete 
the old
cluster, the safer.

Yours,
Laurenz Albe




Re: Table space not returned to the OS ?

2022-06-27 Thread Florents Tselai


> On 27 Jun 2022, at 12:38 PM, Magnus Hagander  wrote:
> 
> 
> 
> On Mon, Jun 27, 2022 at 11:30 AM Florents Tselai  > wrote:
> Hi,
> 
> A few months back (October) I had upgraded a Postgres instance from v12 —> 14.
> 
> The database disk size under /var/lib/postgresql/12 was around 800GB+ back 
> then.
> Note, that IIRC I had used hard-linking during the upgrade.
> 
> In the database itself, lots of things have changed since. 
> In fact, that database itself has been dropped at some point and restored 
> from a backup.
> 
> As I was running out of disk space, I started investigating and found out 
> that 
> 
> /var/lib/postgresql/12/main/base/16385  —>  886GB+ 
> /var/lib/postgresql/14 —> 400GB
> 
> The last modification date on that directory (../12/) appears to be around a 
> month ago,
> When the table with relied 16385 was in fact dropped.
> 
> Now, In my update scripts (I use this db as an OLAP) I occasionally run 
> VACUUM.
> 
> Is it weird that the 886GB space hasn’t been returned to the OS yet? 
> 
> What’s the safest way to return it to the OS manually?
> 
> 
> When you use hardlinks in the upgrade, all the files remain in the old 
> directory when they are removed from the new one, such as when you drop a 
> relation. It is there for emergency recoveries. It's only the contents of the 
> files that are "mirrored", not their existence.
> 
> It looks like you didn't actually delete the old cluster, which you are 
> supposed to do once you have verified that the new one works. This looks like 
> a debian/ubuntu system, which means you probably forgot to run 
> "pg_dropcluster 12 main"? Or if it's not a debian cluster, the equivalent of 
> that which results in

Ah, you’re right 
pg_dropcluster 12 main && systemctl daemon-reload 
worked fine.

Thanks.

> removing the data directory for 12 along with any configuration files it has 
> elsewhere.
> 
> --
>  Magnus Hagander
>  Me: https://www.hagander.net/ 
>  Work: https://www.redpill-linpro.com/ 


Table space not returned to the OS ?

2022-06-27 Thread Florents Tselai
Hi,

A few months back (October) I had upgraded a Postgres instance from v12 —> 14.

The database disk size under /var/lib/postgresql/12 was around 800GB+ back then.
Note, that IIRC I had used hard-linking during the upgrade.

In the database itself, lots of things have changed since. 
In fact, that database itself has been dropped at some point and restored from 
a backup.

As I was running out of disk space, I started investigating and found out that 

/var/lib/postgresql/12/main/base/16385  —>  886GB+ 
/var/lib/postgresql/14 —> 400GB

The last modification date on that directory (../12/) appears to be around a 
month ago, when the table with relid 16385 was in fact dropped.

Now, In my update scripts (I use this db as an OLAP) I occasionally run VACUUM.

Is it weird that the 886GB space hasn’t been returned to the OS yet? 

What’s the safest way to return it to the OS manually?
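Before removing anything by hand, it can help to compare what the running cluster itself accounts for against what the directories hold (a generic sketch, not from the thread):

```sql
-- Total size the current (v14) cluster believes it occupies:
SELECT pg_size_pretty(sum(pg_database_size(oid))) FROM pg_database;

-- Largest relations, to see where the live space actually goes:
SELECT relname, pg_size_pretty(pg_total_relation_size(oid))
FROM pg_class
ORDER BY pg_total_relation_size(oid) DESC
LIMIT 10;
```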

Thanks





Re: Differences in Escaped bytea's when creating a plain pg_dump

2022-06-27 Thread Wolfgang Rißler

Am 27.06.2022 um 09:32 schrieb David G. Johnston:

[snip]


I suggest doing self-contained examples that demonstrate the documented 
behavior not working as documented (or not being functional even if 
intended) to pinpoint any bug that might be lurking here.  With only 
fragments and statements that seem impossible we are left to assume 
operator error.  pg_dump is completely correct in what it is producing 
(non-escape literal \000).


I also suggest using psql and pg_dump directly, and not pgAdmin, to 
demonstrate a core PostgreSQL bug.


David J.



Thank you David,
I followed your advice, using pg_dump and psql directly. In contrast to 
pgAdmin, psql works as expected, reproducibly, again and again.

With
SET standard_conforming_strings = on;

an INSERT without E and double backslash works.

SET standard_conforming_strings = off;

I get the warning and the error. So there is no core PostgreSQL bug, I 
think.


pgAdmin gives different results when running the same SQL commands 
repeatedly. Before filing a bug there, I should update to the current 
release.


Now I will test our C++ code and hopefully find out why I can't 
run the dump from a sql file (which contains SET standard_conforming_strings = 
on;) as a pqxx-transaction...



--

Wolfgang Rißler
mailto: wolfgang.riss...@freenet.de
mobil: +49 1520 9191759
- may the source be with you -




Re: Table space not returned to the OS ?

2022-06-27 Thread Thomas Boussekey
Hello Florents,

On Mon, Jun 27, 2022 at 11:30 AM, Florents Tselai wrote:

> Hi,
>
> A few months back (October) I had upgraded a Postgres instance from v12 —>
> 14.
>
> The database disk size under /var/lib/postgresql/12 was around 800GB+ back
> then.
> Note, that IIRC I had used hard-linking during the upgrade.
>
> In the database itself, lots of things have changed since.
> In fact, that database itself has been dropped at some point and restored
> from a backup.
>
> As I was running out of disk space, I started investigating and found out
> that
>
> /var/lib/postgresql/12/main/base/16385  —>  886GB+
> /var/lib/postgresql/14 —> 400GB
>
Can you check if your upgrade process has used hard links between the 2
folders, as explained here:
 https://dba.stackexchange.com/a/289007/98943

>
> The last modification date on that directory (../12/) appears to be around
> a month ago,
> when the table with relid 16385 was in fact dropped.
>
> Now, In my update scripts (I use this db as an OLAP) I occasionally run
> VACUUM.
>
> Is it weird that the 886GB space hasn’t been returned to the OS yet?
>
> What’s the safest way to return it to the OS manually?
>
> Thanks
>
>
>
>


Re: Table space not returned to the OS ?

2022-06-27 Thread Magnus Hagander
On Mon, Jun 27, 2022 at 11:30 AM Florents Tselai 
wrote:

> Hi,
>
> A few months back (October) I had upgraded a Postgres instance from v12 —>
> 14.
>
> The database disk size under /var/lib/postgresql/12 was around 800GB+ back
> then.
> Note, that IIRC I had used hard-linking during the upgrade.
>
> In the database itself, lots of things have changed since.
> In fact, that database itself has been dropped at some point and restored
> from a backup.
>
> As I was running out of disk space, I started investigating and found out
> that
>
> /var/lib/postgresql/12/main/base/16385  —>  886GB+
> /var/lib/postgresql/14 —> 400GB
>
> The last modification date on that directory (../12/) appears to be around
> a month ago,
> when the table with relid 16385 was in fact dropped.
>
> Now, In my update scripts (I use this db as an OLAP) I occasionally run
> VACUUM.
>
> Is it weird that the 886GB space hasn’t been returned to the OS yet?
>
> What’s the safest way to return it to the OS manually?
>
>
When you use hard links in the upgrade, all the files remain in the old
directory even when they are removed from the new one, such as when you drop a
relation. The old directory is there for emergency recoveries. It's only the
contents of the files that are "mirrored", not their existence.

It looks like you didn't actually delete the old cluster, which you are
supposed to do once you have verified that the new one works. This looks
like a debian/ubuntu system, which means you probably forgot to run
"pg_dropcluster 12 main"? Or if it's not a debian cluster, the equivalent
of that which results in removing the data directory for 12 along with any
configuration files it has elsewhere.
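That hard-link behaviour is ordinary filesystem semantics, and can be sketched with throwaway files (the paths below are made-up examples, not real cluster paths):

```shell
# Throwaway demo of the pg_upgrade --link situation: two names (old and
# new cluster) pointing at the same file data. Removing the new name
# does not free the space while the old name remains.
tmp=$(mktemp -d)
echo "relation data" > "$tmp/old_cluster_file"
ln "$tmp/old_cluster_file" "$tmp/new_cluster_file"       # hard link, as pg_upgrade --link creates
[ "$tmp/old_cluster_file" -ef "$tmp/new_cluster_file" ] && echo "same inode"
rm "$tmp/new_cluster_file"                               # "DROP" in the new cluster...
cat "$tmp/old_cluster_file"                              # ...but the old name still holds the data
rm -rf "$tmp"
```

Only removing the last remaining name (here, the whole old cluster directory) actually returns the space to the OS.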

--
 Magnus Hagander
 Me: https://www.hagander.net/ 
 Work: https://www.redpill-linpro.com/ 


Re: Differences in Escaped bytea's when creating a plain pg_dump

2022-06-27 Thread David G. Johnston
On Sunday, June 26, 2022, WR  wrote:

>
> I made some test with pgadmin. Pgadmin (5.2) also reports this error now.
> And it doesn't matter if standard_conforming_strings is on or off.
>
>
> SET standard_conforming_strings = off;
>
> INSERT INTO public.oned_figures VALUES (1, 'Figure_Wolle1', 476, -476,
> 2000, 2400, 2400, '\000', 500, 0, 'sinus(0|0|0;30;5;0;0,5;0)', '2021-08-31
> 11:53:22.442801', 0, 1);
>
> And
>
> SET standard_conforming_strings = on;
>
> INSERT INTO public.oned_figures VALUES (1, 'Figure_Wolle1', 476, -476,
> 2000, 2400, 2400, '\000', 500, 0, 'sinus(0|0|0;30;5;0;0,5;0)', '2021-08-31
> 11:53:22.442801', 0, 1);
>
>
> Both do report:
>
> WARNING:  nonstandard use of escape in a string literal
> LINE 8: ...(1, 'Figure_Wolle1', 476, -476, 2000, 2400, 2400, '\000\000&...
>  ^
> HINT:  Use the escape string syntax for escapes, e.g., E'\r\n'.
>
> ERROR:  invalid byte sequence for encoding "UTF8": 0x00
>
>
I’m not in a position to test/experiment on any guaranteed timeframe but
your observation that the outcome of those two commands is independent of
value of standard_conforming_strings is either a straight up bug or you’ve
made a mistake somewhere.  The warning is fully conditioned on that setting
being off.

E’\\000’ and ‘\000’ passed as string literals to the bytea input routine
are defined to be equivalent inputs under standard_conforming_strings and
neither can produce the warning in that case.
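That equivalence is easy to verify in psql; a minimal sketch (run against any scratch database, no tables needed):

```sql
-- With standard_conforming_strings = on, a plain '...' literal keeps
-- backslashes as-is, while E'...' folds \\ to \, so both spellings hand
-- the four characters \000 to the bytea input routine.
SET standard_conforming_strings = on;
SELECT '\000'::bytea = E'\\000'::bytea AS same;  -- same = t
SELECT '\000'::bytea;                            -- \x00 (with hex bytea_output)
```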

I suggest doing self-contained examples that demonstrate the documented
behavior not working as documented (or not being functional even if
intended) to pinpoint any bug that might be lurking here.  With only
fragments and statements that seem impossible we are left to assume
operator error.  pg_dump is completely correct in what it is producing
(non-escape literal \000).

I also suggest using psql and pg_dump directly, and not pgAdmin, to
demonstrate a core PostgreSQL bug.

David J.


Re: Differences in Escaped bytea's when creating a plain pg_dump

2022-06-27 Thread WR

Another strange thing is:

In my first mail I wrote that running the dump in pgAdmin works; in the 
last mail I wrote that pgAdmin also produces the error. I experimented a 
little to see how this could have happened.


Every time, I used the following SQL text in the query tool:

SET standard_conforming_strings = on;

INSERT INTO public.oned_figures VALUES (1, 'Figure_Wolle1', 476, -476, 
2000, 2400, 2400, '\000', 500, 0, 'sinus(0|0|0;30;5;0;0,5;0)', 
'2021-08-31 11:53:22.442801', 0, 1);


After each run I deleted the row using the table's View/Edit Data panel.

First run worked.
Second run worked.
Then I changed to SET standard_conforming_strings = off;
Third run worked.
Fourth run threw the error.
Then I changed back to SET standard_conforming_strings = on;
Fifth run threw the error too.
Only adding the E and a second backslash helped.

I could reproduce this behavior every time I closed the query tool and 
opened it again.


But this looks more like a pgAdmin bug.



--
May the source be with you




Re: How can I set up Postgres to use given amount of RAM?

2022-06-27 Thread Andreas Joseph Krogh


On Sunday, June 26, 2022 at 20:40:01, W.P. (laure...@wp.pl) wrote:
Question in topic:

"How can I set up Postgres to use given amount of RAM?"

I now have a laptop with 8GB of RAM, and I can see Linux uses no more than 2-3GB.

So my question is: how do I FORCE PostgreSQL to use, let's say, 2-4GB of 
RAM for caching the tables I run queries on? I can see disk activity 
while running queries.


W.P.
The closest thing I can think of is effective_cache_size: 
https://www.postgresql.org/docs/current/runtime-config-query.html#GUC-EFFECTIVE-CACHE-SIZE
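For reference, a postgresql.conf sketch; the values below are only illustrative guesses for an 8GB laptop, not recommendations, and effective_cache_size is merely a planner hint — it reserves nothing:

```ini
# Illustrative postgresql.conf fragment for an 8GB machine (example values)
shared_buffers = 2GB           # PostgreSQL's own page cache; this is what actually pins RAM
effective_cache_size = 6GB     # planner hint: expected PG + OS cache combined; allocates nothing
work_mem = 64MB                # per sort/hash operation, so real use scales with concurrency
```

Also note that Linux tools reporting "only 2-3GB used" usually exclude the kernel page cache; on repeated reads the OS will cache table files in the remaining RAM by itself.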
 









--
Andreas Joseph Krogh


Re: Differences in Escaped bytea's when creating a plain pg_dump

2022-06-27 Thread WR

I'm back at my problem today:




Example:


postgres=# set standard_conforming_strings to off;
SET


postgres=# set escape_string_warning to off;
SET


postgres=# select '\000'::bytea;
ERROR:  invalid byte sequence for encoding "UTF8": 0x00


postgres=# select '\\000'::bytea;
 bytea
-------
 \x00
(1 row)


I made some tests with pgAdmin. pgAdmin (5.2) also reports this error 
now, and it doesn't matter whether standard_conforming_strings is on or off.



SET standard_conforming_strings = off;

INSERT INTO public.oned_figures VALUES (1, 'Figure_Wolle1', 476, -476, 
2000, 2400, 2400, '\000', 500, 0, 'sinus(0|0|0;30;5;0;0,5;0)', 
'2021-08-31 11:53:22.442801', 0, 1);


And

SET standard_conforming_strings = on;

INSERT INTO public.oned_figures VALUES (1, 'Figure_Wolle1', 476, -476, 
2000, 2400, 2400, '\000', 500, 0, 'sinus(0|0|0;30;5;0;0,5;0)', 
'2021-08-31 11:53:22.442801', 0, 1);



Both report:

WARNING:  nonstandard use of escape in a string literal

LINE 8: ...(1, 'Figure_Wolle1', 476, -476, 2000, 2400, 2400, '\000\000&...
 ^
HINT:  Use the escape string syntax for escapes, e.g., E'\r\n'.

ERROR:  invalid byte sequence for encoding "UTF8": 0x00


The warning can be avoided by adding the E before the string constant.

The only way to avoid the error is to double the backslash.

INSERT INTO public.oned_figures VALUES (1, 'Figure_Wolle1', 476, -476, 
2000, 2400, 2400, E'\\000', 500, 0, 'sinus(0|0|0;30;5;0;0,5;0)', 
'2021-08-31 11:53:22.442801', 0, 1);


I also dumped the table again after the INSERT, and the data was correct 
(but again without the E and with a single backslash).


What I can't understand: why does pg_dump produce the string without the 
E and without doubled backslashes when they are needed? Now I have to write a 
correction routine in C++ that fixes the dumps before using them.



--
May the source be with you