Re: Migrating from Oracle - Implicit Casting Issue

2022-07-19 Thread Karthik K L V
Hi David,

Thanks for the quick response.
Making sure I got it right - you mean the Postgres DB server when you say
"server", right?
IIUC, by configuring this property, the driver will not set the type and
leave it to the Postgres DB Server to map it to the appropriate type. Will
this have any performance implication on the Postgres server that we need
to worry about?

Thanks,
Karthik K L V

On Tue, Jul 19, 2022 at 12:12 PM David G. Johnston <
david.g.johns...@gmail.com> wrote:

> On Monday, July 18, 2022, Karthik K L V 
> wrote:
>
>> Hi Team,
>>
>> We are migrating from Oracle 12c to Aurora Postgres 13 and running into
>> implicit casting issues.
>>
>> Oracle is able to implicitly cast the bind values of prepared statements
>> executed from the application to the appropriate type - String -> Number,
>> String -> Date, Number -> String etc. - when there is a mismatch between the
>> Java data type and the column datatype.
>>
>> For example: If the Datatype of a Column is defined as Number and the
>> application sends the bind value as a String (with single quotes in the
>> query) - Oracle DB is able to implicitly cast to Number and execute the
>> query and return the results.
>>
>> The same is not true with Postgres and we are getting below exception
>>
>> *org.postgresql.util.PSQLException: ERROR: operator does not exist:
>> bigint = character varying*
>> *Hint: No operator matches the given name and argument types. You might
>> need to add explicit type casts..*
>>
>> We found a Postgres Driver property - stringtype=unspecified which
>> appears to solve this problem and have the following questions.
>> https://jdbc.postgresql.org/documentation/83/connect.html
>>
>> Could you please let us know the following?
>>
>> Q1) Will configuring this stringtype property introduce overhead on
>> Postgres leading to Performance issues
>> Q2)Does setting this attribute have any other implications on the data in
>> the DB.
>> Q3)Is there any plan to deprecate / stop supporting this attribute in
>> future Aurora Postgres releases.
>>
>>
> That setting is not recognized by the server in any way, it is a driver
> concern only.  IIUC it makes the Java Driver behave in a way consistent
> with the expectations of the server since by leaving the supplied type info
> undeclared the server can use its own logic.  If it works for you I say use
> it, it will be less problematic than methodically fixing your queries up
> front.  Though if there are some that show to be bottlenecks getting the
> type info correct may prove to make a difference in some situations.
>
> David J.
>
>

-- 
Karthik klv
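For reference, the two approaches discussed above look roughly like this (table
and column names below are placeholders, not from the original application). The
first adds the driver property to the JDBC URL, e.g.
jdbc:postgresql://host:5432/db?stringtype=unspecified, so the driver sends string
parameters with an unknown type and the server infers the type from context. The
second keeps the default stringtype and casts the bound parameter explicitly in
the statement text:

-- sketch only: placeholder table/column, parameter bound as a Java String
-- with the default stringtype=varchar, add an explicit cast in the SQL:
SELECT * FROM orders WHERE order_id = ?::bigint;

-- with stringtype=unspecified no cast is needed; the server resolves the
-- parameter type from the bigint column it is compared against:
SELECT * FROM orders WHERE order_id = ?;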


Batch process

2022-07-19 Thread Rama Krishnan
Hi All,

I am doing a purge activity. My sales table contains 5M records and I am going
to delete data older than 1 year (about 3M records). The delete runs for a long
time, so I want to do batch-wise deletion through PL/pgSQL.




create or replace function data_purge() returns void as $$
Declare
Begin
Drop table test_old;
Create table test_old as select * from sales where bill_date
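The snippet above is cut off in the archive. For illustration, a minimal sketch of
one common way to do the batched delete itself (the batch size, and the use of ctid
because no key column is shown in the post, are assumptions):

create or replace procedure purge_sales_batched(batch_size int default 10000)
language plpgsql
as $$
declare
    deleted_rows bigint;
begin
    loop
        -- delete one batch of rows older than one year
        delete from sales
        where ctid = any (array(
            select ctid
            from sales
            where bill_date < now() - interval '1 year'
            limit batch_size
        ));
        get diagnostics deleted_rows = row_count;
        exit when deleted_rows = 0;
        commit;  -- release locks between batches; requires calling with CALL outside a transaction block
    end loop;
end;
$$;

-- call it with:  call purge_sales_batched(50000);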

Re: postgis

2022-07-19 Thread Marc Millas
Postgres installed, but not postgis.. which is why I need some help...

Marc MILLAS
Senior Architect
+33607850334
www.mokadb.com



On Tue, Jul 19, 2022 at 11:43 PM Adrian Klaver 
wrote:

> On 7/19/22 2:09 PM, Marc Millas wrote:
> > I did run each step of the script and did install a postgres 12.11.
> > then destroyed the instance created by the script, and, then
> > pg_createcluster a new one, which is running fine.
>
> Does this mean you have PostGIS installed now?
>
> > Marc MILLAS
> > Senior Architect
> > +33607850334
> > www.mokadb.com 
>
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com
>


Re: citext on exclude using gist

2022-07-19 Thread David G. Johnston
On Tue, Jul 19, 2022 at 4:43 PM Tom Lane  wrote:

> Jean Carlo Giambastiani Lopes  writes:
> > I'm trying to use a citext column in the following manner without
> success:
>
> > create extension btree_gist;
> > create extension citext;
> > create table my_table(
> > foo citext,
> > bar numrange,
> > primary key (foo, bar),
> > exclude using gist (foo with =, bar with &&)
> > );
>
> > is this possible? If so, what's wrong on this snippet?
>
> btree_gist knows nothing of citext, so it's not providing any suitable
> operator class.
>
> (Code-wise it probably wouldn't be that hard to add, but I see no good way
> to deal with the inter-extension connection other than to give btree_gist
> a hard dependency on citext, which people wouldn't appreciate too much.)
>
>
I'd expect that creating a "btree_gist_citext" extension would provide a
pathway forward here.  IIUC, it could even depend on the two extensions it
is expanding upon.

David J.


Re: postgis

2022-07-19 Thread Imre Samu
> from your message, I understand that for debian 11, I can NOT get any
3.0.x version.

I can't see any PostGIS 3.0.x version in the default
"http://apt.postgresql.org/pub/repos/apt/ bullseye-pgdg main 12" repo.


root@6b22a4450d93:/# cat /etc/apt/sources.list.d/pgdg.list
deb [ signed-by=/usr/local/share/keyrings/postgres.gpg.asc ] http://apt.postgresql.org/pub/repos/apt/ bullseye-pgdg main 12

If the version of PostGIS(3.0.x) is important for compatibility,
you should also pay attention to the version of the "geos" and "proj"
packages.

As I understand it, the debian "postgresql-12-postgis-3" package is
continuously updated to the latest official postgis version
( postgis 3.0 -> postgis 3.1 -> postgis 3.2  ... ).
And the latest update was at 2022-02-14
https://www.postgresql.org/message-id/e1njy2b-0006m6...@atalia.postgresql.org


( My knowledge is limited .. )
and there is a dedicated Postgis mail list:
-  https://lists.osgeo.org/mailman/listinfo/postgis-users
and a Debian packaging mail list:
- https://www.postgresql.org/list/pgsql-pkg-debian/

Regards,
 Imre



Marc Millas  ezt írta (időpont: 2022. júl. 19., K,
20:20):

> Hi,
>
> from your message, I understand that for debian 11, I can NOT get any
> 3.0.x version.
> right ?
>
> Marc MILLAS
> Senior Architect
> +33607850334
> www.mokadb.com
>
>
>
> On Tue, Jul 19, 2022 at 7:20 PM Imre Samu  wrote:
>
>> > I would like to install postgis 3.04 on a debian 11
>> > postgres 12.
>> > I may use whatever repo.
>> > I install postgres from postgresql.org...
>>
>> As I see - from the official postgresql.org debian11 repo,
>> you can only install the "postgresql-12-postgis-3" package  ( now:
>> Postgis=3.2.1 )
>>
>> docker run -it --rm postgres:12 bash
>>   apt update && apt search postgis | grep postgresql-12-postgis-3
>>   ---> postgresql-12-postgis-3/bullseye-pgdg 3.2.1+dfsg-1.pgdg110+1 amd64
>>   cat /etc/os-release | grep VERSION
>>   ---> VERSION="11 (bullseye)"
>>
>> Now the latest 3.0 version is
>> http://postgis.net/2022/02/02/postgis-3.0.5/
>>
>> regards,
>>   Imre
>>
>> Marc Millas  ezt írta (időpont: 2022. júl. 18.,
>> H, 20:48):
>>
>>> Hi,
>>> postgres 12.
>>> I may use whatever repo.
>>>
>>> I install postgres from postgresql.org...
>>>
>>>
>>>
>>> Marc MILLAS
>>> Senior Architect
>>> +33607850334
>>> www.mokadb.com
>>>
>>>
>>>
>>> On Mon, Jul 18, 2022 at 7:15 PM Adrian Klaver 
>>> wrote:
>>>
 On 7/18/22 10:08, Marc Millas wrote:
 > Hi,
 >
 > I would like to install postgis 3.04 on a debian 11.
 >
 > digging into various web sites, I didn't find the name of that package.
 >
 > can someone help ?

 Should have added to previous response, what version of Postgres?
 >
 > thanks
 >
 > Marc MILLAS
 > Senior Architect
 > +33607850334
 > www.mokadb.com 
 >


 --
 Adrian Klaver
 adrian.kla...@aklaver.com

>>>


Re: citext on exclude using gist

2022-07-19 Thread Tom Lane
Jean Carlo Giambastiani Lopes  writes:
> I'm trying to use a citext column in the following manner without success:

> create extension btree_gist;
> create extension citext;
> create table my_table(
> foo citext,
> bar numrange,
> primary key (foo, bar),
> exclude using gist (foo with =, bar with &&)
> );

> is this possible? If so, what's wrong on this snippet?

btree_gist knows nothing of citext, so it's not providing any suitable
operator class.

(Code-wise it probably wouldn't be that hard to add, but I see no good way
to deal with the inter-extension connection other than to give btree_gist
a hard dependency on citext, which people wouldn't appreciate too much.)

You could fake it with

exclude using gist (lower(foo) with =, bar with &&)

which is surely conceptually ugly, but I think it gives compatible
semantics.
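For completeness, a sketch of the original table with that workaround applied
(same columns as the post, only the exclusion constraint changes):

create extension if not exists btree_gist;
create extension if not exists citext;
create table my_table(
    foo citext,
    bar numrange,
    primary key (foo, bar),
    -- lower(foo) yields plain text, for which btree_gist does provide an = opclass
    exclude using gist (lower(foo) with =, bar with &&)
);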

regards, tom lane




citext on exclude using gist

2022-07-19 Thread Jean Carlo Giambastiani Lopes
Hi,

I'm trying to use a citext column in the following manner without success:

create extension btree_gist;
create extension citext;
create table my_table(
foo citext,
bar numrange,
primary key (foo, bar),
exclude using gist (foo with =, bar with &&)
);

is this possible? If so, what's wrong on this snippet?


Regards,
Jean Lopes


Re: Concurrent INSERT statements with RETURNING clause resetting SERIAL sequence

2022-07-19 Thread Gavan Schneider

On 20 Jul 2022, at 4:08, Francisco Olarte wrote:


As a remark, in Spain bill numbers need to be gapless and increasing. I
have done it with a sequence ( with cache 1, so I doubt it is much
more performant than a single-row table, as it has a backing element
which needs to be saved ), and just used another process which swept
the values and filled the gaps a posteriori ( there is a way to fill a
gap, it has legal uses, you can use it for that as long as you do not
have too many, basically insert cancelled bills in the gaps ). I
would probably do it with the one-row table now for simplicity; I
doubt I gained enough performance to justify the 20-something perl
lines plus the crontab entry needed to use sequences. As my beard turns
grey I appreciate more the simpler ways to do things.

One answer to this problem has been around for a while, and my version 
is shown below. No extra table is needed, just a windowed look at the 
table where the reference is used. My usage is at human speed so 
performance is not an issue but it should be pretty good if there are 
minimal holes. What is not addressed is that a reference can be reissued 
up to the time the calling process commits an entry in the table and 
takes that reference out of circulation. There are different approaches 
to handling/preventing such collisions.


CREATE OR REPLACE FUNCTION accounts.next_active_reference()
RETURNS integer
LANGUAGE 'sql'
VOLATILE  LEAKPROOF STRICT
PARALLEL UNSAFE
COST 3000 -- pure guesstimate
AS $BODY$
SELECT L.transaction_ref + 1 AS start
FROM accounts.transaction_refs AS L
LEFT OUTER JOIN accounts.transaction_refs AS R
ON L.transaction_ref + 1 = R.transaction_ref
WHERE R.transaction_ref IS NULL
		AND L.transaction_ref > 700 -- historic reasons only, added to existing system

ORDER BY L.transaction_ref
LIMIT 1;
$BODY$;

COMMENT ON FUNCTION accounts.next_active_reference() IS
$COMMENT$
	Determines the next available reference number, making sure to fill any 
holes.
	The holes in the sequence prior to 700 are ignored (pure history), and 
this code
	will make sure any out of sequence data blocks will not cause 
conflicts.

Credits:
	Ref: 

	2022 update: this link is now dead, only reporting "There is nothing 
here".

$COMMENT$;

Regards

Gavan Schneider
——
Gavan Schneider, Sodwalls, NSW, Australia
Explanations exist; they have existed for all time; there is always a 
well-known solution to every human problem — neat, plausible, and 
wrong.

— H. L. Mencken, 1920


Re: postgis

2022-07-19 Thread Adrian Klaver

On 7/19/22 2:09 PM, Marc Millas wrote:

I did run each step of the script and did install a postgres 12.11.
then destroyed the instance created by the script, and, then 
pg_createcluster a new one, which is running fine.


Does this mean you have PostGIS installed now?


Marc MILLAS
Senior Architect
+33607850334
www.mokadb.com 



--
Adrian Klaver
adrian.kla...@aklaver.com




Re: postgis

2022-07-19 Thread Marc Millas
I did run each step of the script and did install a postgres 12.11.
then destroyed the instance created by the script, and, then
pg_createcluster a new one, which is running fine.
Marc MILLAS
Senior Architect
+33607850334
www.mokadb.com



On Tue, Jul 19, 2022 at 8:42 PM Adrian Klaver 
wrote:

> On 7/19/22 11:19, Marc Millas wrote:
> > It is. I do begin with the postgres site script:
> >
> > sudo sh -c 'echo "deb http://apt.postgresql.org/pub/repos/apt $(lsb_release -cs)-pgdg main" > /etc/apt/sources.list.d/pgdg.list'
> >
> >
> > so...
>
> Have you run?:
>
> sudo apt update
>
>
> >
> >
> > Marc MILLAS
> > Senior Architect
> > +33607850334
> > www.mokadb.com 
> >
>
>
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com
>


Re: postgis

2022-07-19 Thread Adrian Klaver

On 7/19/22 11:19, Marc Millas wrote:

It is. I do begin with the postgres site script:

sudo sh -c 'echo "deb http://apt.postgresql.org/pub/repos/apt $(lsb_release -cs)-pgdg main" > /etc/apt/sources.list.d/pgdg.list'


so...


Have you run?:

sudo apt update





Marc MILLAS
Senior Architect
+33607850334
www.mokadb.com 





--
Adrian Klaver
adrian.kla...@aklaver.com




Re: postgis

2022-07-19 Thread Marc Millas
Hi,

from your message, I understand that for debian 11, I can NOT get any 3.0.x
version.
right ?

Marc MILLAS
Senior Architect
+33607850334
www.mokadb.com



On Tue, Jul 19, 2022 at 7:20 PM Imre Samu  wrote:

> > I would like to install postgis 3.04 on a debian 11
> > postgres 12.
> > I may use whatever repo.
> > I install postgres from postgresql.org...
>
> As I see - from the official postgresql.org debian11 repo,
> you can only install the "postgresql-12-postgis-3" package  ( now:
> Postgis=3.2.1 )
>
> docker run -it --rm postgres:12 bash
>   apt update && apt search postgis | grep postgresql-12-postgis-3
>   ---> postgresql-12-postgis-3/bullseye-pgdg 3.2.1+dfsg-1.pgdg110+1 amd64
>   cat /etc/os-release | grep VERSION
>   ---> VERSION="11 (bullseye)"
>
> Now the latest 3.0 version is http://postgis.net/2022/02/02/postgis-3.0.5/
>
> regards,
>   Imre
>
> Marc Millas  ezt írta (időpont: 2022. júl. 18.,
> H, 20:48):
>
>> Hi,
>> postgres 12.
>> I may use whatever repo.
>>
>> I install postgres from postgresql.org...
>>
>>
>>
>> Marc MILLAS
>> Senior Architect
>> +33607850334
>> www.mokadb.com
>>
>>
>>
>> On Mon, Jul 18, 2022 at 7:15 PM Adrian Klaver 
>> wrote:
>>
>>> On 7/18/22 10:08, Marc Millas wrote:
>>> > Hi,
>>> >
>>> > I would like to install postgis 3.04 on a debian 11.
>>> >
>>> > digging into various web sites, I didn't find the name of that package.
>>> >
>>> > can someone help ?
>>>
>>> Should have added to previous response, what version of Postgres?
>>> >
>>> > thanks
>>> >
>>> > Marc MILLAS
>>> > Senior Architect
>>> > +33607850334
>>> > www.mokadb.com 
>>> >
>>>
>>>
>>> --
>>> Adrian Klaver
>>> adrian.kla...@aklaver.com
>>>
>>


Re: postgis

2022-07-19 Thread Marc Millas
It is. I do begin with the postgres site script:

sudo sh -c 'echo "deb http://apt.postgresql.org/pub/repos/apt
$(lsb_release -cs)-pgdg main" > /etc/apt/sources.list.d/pgdg.list'


so...


Marc MILLAS
Senior Architect
+33607850334
www.mokadb.com



On Tue, Jul 19, 2022 at 6:54 PM Adrian Klaver 
wrote:

> On 7/19/22 9:51 AM, Marc Millas wrote:
> > just... nothing !
>
> I thought you said you used the Postgres community repo to install
> Postgres.
>
> Is that not the case?
>
>
> >
> > Marc MILLAS
> > Senior Architect
> > +33607850334
> > www.mokadb.com 
> >
> >
>
>
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com
>


Re: Concurrent INSERT statements with RETURNING clause resetting SERIAL sequence

2022-07-19 Thread Francisco Olarte
On Tue, 19 Jul 2022 at 18:50, Sebastien Flaesch
 wrote:
>> If that's the behavior you want, you can build it out of standard SQL 
>> facilities (e.g. update a one-row table).
> Can you elaborate please?
> Do you mean the code should use an UPDATE on a one-row table to acquire a 
> lock?

That's the usual SQL way. You can, using the appropriate isolation
level, select the max() from the table using the IDs ( but this will
probably lock it whole ), you can use a one-row table with a current
int column as a sequence, and set current=current+1, but this will
lock everyone inserting ( and need an extra table ). You can use a
name-current unique sequences table and update that ( it saves tables
but it can worsen locking, as many engines lock whole pages ). The
consecutive gapless sequence problem surfaces a lot, but there is no
good way to solve it without locking, as once you get the value
everybody else needing it has to wait until you decide whether to
commit or rollback.

Sequences are for when you need autogenerated keys. They use
consecutive integer blocks nearly everywhere because they are easy to
code, play well with btree indexes, correlate well with insertion
times, but they give up the "no hole" approach to improve performance.
They can even return values unordered across different sessions (
sequences acquire, and burn, blocks in a locking way; when you need
high concurrency you make them return big blocks, but then values from
different sessions can be delivered out of order ).

If you need consecutive numbers, bite the bullet and code it; trying
to do it with sequences will probably end with an approach which will
fail in some corner cases and be slower than just coding it with a
single-row table ( just remember to check it, so you do not have the
fun which ensued in oracle in the old days when someone put two rows
in dual ). All the queries you have posted are probably more complex
than a couple of procedures to update and read a table, even using eval
to simulate named sequences if you want ( just remember to always
pre-increment before reading, there is a reason everybody does it that
way ).
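A minimal sketch of the single-row counter table approach described above (table
and column names are illustrative, not from the thread):

-- one-row table holding the last issued number
create table bill_counter (current bigint not null);
insert into bill_counter values (0);

-- each transaction takes the next gapless number; concurrent callers block on
-- the row lock until the holder commits or rolls back, the price of gaplessness
update bill_counter
set current = current + 1
returning current;

A rolled-back transaction releases the lock without consuming a number, so no
gap is left behind, unlike with a sequence.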

As a remark, in Spain bill numbers need to be gapless and increasing. I
have done it with a sequence ( with cache 1, so I doubt it is much
more performant than a single-row table, as it has a backing element
which needs to be saved ), and just used another process which swept
the values and filled the gaps a posteriori ( there is a way to fill a
gap, it has legal uses, you can use it for that as long as you do not
have too many, basically insert cancelled bills in the gaps ). I
would probably do it with the one-row table now for simplicity; I
doubt I gained enough performance to justify the 20-something perl
lines plus the crontab entry needed to use sequences. As my beard turns
grey I appreciate more the simpler ways to do things.

Francisco Olarte.




Re: Out Of Memory

2022-07-19 Thread Adrian Klaver

On 7/19/22 10:54 AM, ghoostin...@mail.com wrote:

Again reply to list also.

Ccing list.


What should i do and what’s the requirements
———-


The link explains what you should do.

What requirements are you talking about?

This is probably something that the Odoo community will be better able 
to answer:


https://www.odoo.com/forum/help-1


--
Adrian Klaver
adrian.kla...@aklaver.com




Re: Out Of Memory

2022-07-19 Thread Adrian Klaver

On 7/19/22 9:56 AM, ghoostin...@mail.com wrote:

Please reply to list also.
Ccing list


 > 1) Detail how you are doing migration
1 - I'm Using Tool OpenUpgrade OCA
2 - Creation new db and restore my db on it (36GB and Some table 5m+ 
rows)
3 - Execution of an SQL script to uninstall unwanted modules
4 - Switching to v12.0
-> gitrecupdate 12.0
5 - Execute shell to uninstall unwanted modules
-> echo "self.env['base.module.upgrade'].upgrade_module()" | 
./odoo/odoo-bin shell --config ./odoo12.conf -d bt --stop-after-init
6 - Switching to 13.0 & Execute OpenUpgrade
-> gitrecupdate 13.0
-> ./OCA/OpenUpgrade/odoo-bin --config ./openupgrade13.conf -d 
bt -u all --stop-after-init
7 - Switching to 14.0
-> gitrecupdate 14.0
-> ./odoo/odoo-bin --config ./odoo14.conf -d bt -u all 
--stop-after-init


I would read:

https://oca.github.io/OpenUpgrade/after_migration.html

I have to believe doing serial migrations with no clean up is going to 
eat up space when you have a database of 36GB, RAM of 30GB and 
a disk of 100GB.





2) Are you doing this on the same machine?
- Yeah, i did it on AWS Server
3) OS and version
- Ubuntu 20.04.4 LTS


4) Hardware specifications for machine e.g. memory, disk space, etc.
- Amazon AWS
- Instance Type: r6gd.xlarge
- OS: Ubuntu
- RAM: 30GB, Disk Space: 100GB


Note: It works till Migration to 13.0 , Stock account Migration

—— - - -
On 19/07/2022 at 3:00 PM, Adrian Klaver wrote:


On 7/19/22 06:16, ghoostin...@mail.com wrote:

Hello, I’m using Odoo ERP v12 Community and I want to migrate to v14. I have 
tables containing more than 5M rows, and after 8 hours of execution it shows the 
message “out of memory”, so I need your help


And we need information:

1) Detail how you are doing migration.

2) Are you doing this on the same machine?

3) OS and version.

4) Hardware specifications for machine e.g. memory, disk space, etc.


--
Adrian Klaver
adrian.kla...@aklaver.com




--
Adrian Klaver
adrian.kla...@aklaver.com




Re: pgsql 10.19 : "ERROR: cannot convert infinity to numeric" except there is no infinity

2022-07-19 Thread Adrian Klaver

On 7/19/22 10:42 AM, Adrian Klaver wrote:

On 7/19/22 10:32 AM, Adrian Klaver wrote:

On 7/19/22 10:26 AM, Achilleas Mantzios wrote:




Have you tried:

NULLIF(cept.value, 'inf')::numeric


That was a miss. I originally tested this on Postgres 14 and of course 
it worked. Trying it on Postgres 12 got:


select nullif(1.5, 'inf')::numeric;
ERROR:  invalid input syntax for type numeric: "inf"
LINE 1: select nullif(1.5, 'inf')::numeric;

Due to this:

https://www.postgresql.org/docs/current/functions-conditional.html#FUNCTIONS-NULLIF 



"The two arguments must be of comparable types. To be specific, they are 
compared exactly as if you had written value1 = value2, so there must be 
a suitable = operator available."


So:

1.5::numeric = 'inf'::numeric


The cheat would be:

select version();
                                version
------------------------------------------------------------------------------
 PostgreSQL 12.10 on x86_64-pc-linux-gnu, compiled by gcc (SUSE Linux) 7.5.0, 64-bit

select nullif('inf'::float, 'inf')::numeric;
 nullif
--------
   NULL

select nullif(1.5::float, 'inf')::numeric;
 nullif
--------
    1.5
(1 row)
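For what it is worth, on PostgreSQL 14 and later (as Tom notes elsewhere in the
thread) numeric itself accepts infinity, so the float detour above is only
needed on older releases:

select 'inf'::numeric;
  numeric
----------
 Infinity
(1 row)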




--
Achilleas Mantzios
DBA, Analyst, IT Lead
IT DEPT
Dynacom Tankers Mgmt







--
Adrian Klaver
adrian.kla...@aklaver.com




Re: pgsql 10.19 : "ERROR: cannot convert infinity to numeric" except there is no infinity

2022-07-19 Thread Adrian Klaver

On 7/19/22 10:32 AM, Adrian Klaver wrote:

On 7/19/22 10:26 AM, Achilleas Mantzios wrote:




Have you tried:

NULLIF(cept.value, 'inf')::numeric


That was a miss. I originally tested this on Postgres 14 and of course 
it worked. Trying it on Postgres 12 got:


select nullif(1.5, 'inf')::numeric;
ERROR:  invalid input syntax for type numeric: "inf"
LINE 1: select nullif(1.5, 'inf')::numeric;

Due to this:

https://www.postgresql.org/docs/current/functions-conditional.html#FUNCTIONS-NULLIF

"The two arguments must be of comparable types. To be specific, they are 
compared exactly as if you had written value1 = value2, so there must be 
a suitable = operator available."


So:

1.5::numeric = 'inf'::numeric



--
Achilleas Mantzios
DBA, Analyst, IT Lead
IT DEPT
Dynacom Tankers Mgmt




--
Adrian Klaver
adrian.kla...@aklaver.com




Re: pgsql 10.19 : "ERROR: cannot convert infinity to numeric" except there is no infinity

2022-07-19 Thread Adrian Klaver

On 7/19/22 10:26 AM, Achilleas Mantzios wrote:

Thank you Adrian!


Actually thank:

https://sqlformat.darold.net/



Στις 19/7/22 18:36, ο/η Adrian Klaver έγραψε:

On 7/19/22 03:38, Achilleas Mantzios wrote:

I reformatted queries to see thing better.





AND numrange(ceptl.min_alarm::numeric, ceptl.max_alarm::numeric, '()') 
@> cept.value::numeric

ORDER BY
    1;

So the above fails. In your title when you say there is no infinity 
that means the cept.value, ceptl.min_alarm or ceptl.max_alarm  fields 
do not have any '-infinity' or 'infinity' values, correct?

There is infinity in cept.value , just not in this result set.
I got confused and wrongly assumed that since the result set (without 
the filter in the WHERE clause including cept.value::numeric) did not 
contain any infinity it should also work with the filter in the WHERE 
clause. Apparently a subplan executes this conversion in the WHERE 
before the other filters. I did not do any analyze to prove this.




Have you tried:

NULLIF(cept.value, 'inf')::numeric


--
Achilleas Mantzios
DBA, Analyst, IT Lead
IT DEPT
Dynacom Tankers Mgmt










--
Adrian Klaver
adrian.kla...@aklaver.com




Re: pgsql 10.19 : "ERROR: cannot convert infinity to numeric" except there is no infinity

2022-07-19 Thread David G. Johnston
On Tuesday, July 19, 2022, Achilleas Mantzios 
wrote:

> Thanks David
> Στις 19/7/22 17:19, ο/η David G. Johnston έγραψε:
>
> On Tuesday, July 19, 2022, Achilleas Mantzios <
> ach...@matrix.gatewaynet.com> wrote:
>
>>
>> ERROR:  cannot convert infinity to numeric
>>
>> -- has no problem testing against infinity
>>
>> select it.id, cept.value::numeric as val,
>>        numrange(ceptl.min_alarm::numeric, ceptl.max_alarm::numeric, '()') as range
>> from items it, cept_report cept, dynacom.vessels vsl, machdefs md, cept_reportlimits ceptl
>> where it.id=cept.id AND md.defid=ceptl.defid AND it.defid=md.defid AND it.vslwhid=vsl.id
>>   AND vsl.vslstatus='Active' and md.application = 'Critical Equipment Performance Test'
>>   AND cept.systemdate >= (now()-'1 year'::interval) AND cept.value='inf'
>> ORDER BY 1;
>>  id | val | range
>> ----+-----+-------
>> (0 rows)
>>
>>
> The column cept.value contains an infinity.  I see nothing unusual in any
> of these queries given that fact.  If you try to cast the infinity to
> numeric it will fail.  If that doesn’t happen the query won’t fail.
>
> Sorry I must have been dizzy today with so much support.
>
> Yep, there are some infinity in there, but not in this result set.
>
> I think when the casting is in the WHERE filter for some reason some
> subplan uses this filter (and fails). But when this check is applied to the
> result, no infinity is found and works correctly.
>

That is what it means for SQL to be a declarative language, the order of
execution/evaluation is determined to be efficient and not what is
explicitly written.  You do have some control though, but using it also
means you might make things worse.

I think you have issues anyway if you are doing equality checks on what
seems to be a floating point column, regardless of which way you do the
cast.

David J.


Re: pgsql 10.19 : "ERROR: cannot convert infinity to numeric" except there is no infinity

2022-07-19 Thread Achilleas Mantzios

Thank you Adrian!

Στις 19/7/22 18:36, ο/η Adrian Klaver έγραψε:

On 7/19/22 03:38, Achilleas Mantzios wrote:

I reformatted queries to see thing better.



-- still has problem testing the range

select it.id, cept.value::numeric as val,
       numrange(ceptl.min_alarm::numeric, ceptl.max_alarm::numeric, '()') as range
from items it, cept_report cept, dynacom.vessels vsl, machdefs md, cept_reportlimits ceptl
where it.id=cept.id AND md.defid=ceptl.defid AND it.defid=md.defid AND it.vslwhid=vsl.id
  AND vsl.vslstatus='Active' and md.application = 'Critical Equipment Performance Test'
  AND cept.systemdate >= (now()-'1 year'::interval)
  AND numrange(ceptl.min_alarm::numeric, ceptl.max_alarm::numeric, '()') @> cept.value::numeric
ORDER BY 1;

ERROR:  cannot convert infinity to numeric


SELECT
    it.id,
    cept.value::numeric AS val,
    numrange(ceptl.min_alarm::numeric, ceptl.max_alarm::numeric, '()') 
AS RANGE

FROM
    items it,
    cept_report cept,
    dynacom.vessels vsl,
    machdefs md,
    cept_reportlimits ceptl
WHERE
    it.id = cept.id
    AND md.defid = ceptl.defid
    AND it.defid = md.defid
    AND it.vslwhid = vsl.id
    AND vsl.vslstatus = 'Active'
    AND md.application = 'Critical Equipment Performance Test'
    AND cept.systemdate >= (now() - '1 year'::interval)
AND numrange(ceptl.min_alarm::numeric, ceptl.max_alarm::numeric, '()') 
@> cept.value::numeric

ORDER BY
    1;

So the above fails. In your title when you say there is no infinity 
that means the cept.value, ceptl.min_alarm or ceptl.max_alarm  fields 
do not have any '-infinity' or 'infinity' values, correct?

There is infinity in cept.value , just not in this result set.
I got confused and wrongly assumed that since the result set (without 
the filter in the WHERE clause including cept.value::numeric) did not 
contain any infinity it should also work with the filter in the WHERE 
clause. Apparently a subplan executes this conversion in the WHERE 
before the other filters. I did not do any analyze to prove this.





-- no problem if the query goes into its barrier and the test done 
outside
with bar as (
    select it.id, cept.value::numeric as val,
           numrange(ceptl.min_alarm::numeric, ceptl.max_alarm::numeric, '()') as range
    from items it, cept_report cept, dynacom.vessels vsl, machdefs md, cept_reportlimits ceptl
    where it.id=cept.id AND md.defid=ceptl.defid AND it.defid=md.defid AND it.vslwhid=vsl.id
      AND vsl.vslstatus='Active' and md.application = 'Critical Equipment Performance Test'
      AND cept.systemdate >= (now()-'1 year'::interval)
    ORDER BY 1
)
select * from bar where NOT range @> val;
-- good data here


WITH bar AS (
    SELECT
    it.id,
    cept.value::numeric AS val,
    numrange(ceptl.min_alarm::numeric, ceptl.max_alarm::numeric, 
'()') AS

RANGE
    FROM
    items it,
    cept_report cept,
    dynacom.vessels vsl,
    machdefs md,
    cept_reportlimits ceptl
    WHERE
    it.id = cept.id
    AND md.defid = ceptl.defid
    AND it.defid = md.defid
    AND it.vslwhid = vsl.id
    AND vsl.vslstatus = 'Active'
    AND md.application = 'Critical Equipment Performance Test'
    AND cept.systemdate >= (now() - '1 year'::interval)
    ORDER BY
    1
)
SELECT
    *
FROM
    bar
WHERE
    NOT RANGE @> val;

This version succeeds, correct? So breaking the range construction 
into one step and the @> test into another works. Though I am not sure 
why  @> changed to NOT  @>?
Yes, this succeeds. The correct form is with the NOT; it does not change the 
behavior of the initial query.


--
Achilleas Mantzios
DBA, Analyst, IT Lead
IT DEPT
Dynacom Tankers Mgmt









Re: restore question

2022-07-19 Thread Ronald Haynes
Many thanks folks, the last message by Adrian gave me the relatively simple 
prescription.

Sincerely,

Dr. Ronald D. Haynes
Professor, Department of Mathematics and Statistics
Chair, MSc and Phd Scientific Computing Programs
Memorial University of Newfoundland

We acknowledge that the lands on which Memorial University’s campuses are 
situated are in the traditional territories of diverse Indigenous groups, and 
we acknowledge with respect the diverse histories and cultures of the Beothuk, 
Mi’kmaq, Innu, and Inuit of this province.
On Jul 19, 2022, 2:22 PM -0230, Adrian Klaver , 
wrote:
> On 7/19/22 9:44 AM, Ronald Haynes wrote:
> > Thanks Ray, running
> >
> > pSql -f backup-file.sql
> >
> >
> >
> > psql: error: FATAL: database "rhaynes74" does not exist
>
> No that is expected as you did not specify a database to connect to
> using -d . In that case psql uses the OS user name you are
> running the command as for the database name per:
>
> https://www.postgresql.org/docs/current/libpq-connect.html#LIBPQ-PARAMKEYWORDS
>
> dbname
>
> The database name. Defaults to be the same as the user name. In
> certain contexts, the value is checked for extended formats; see Section
> 34.1.1 for more details on those.
>
> So change the command to:
>
> psql -d postgres -U  -f backup-file.sql
>
> Where  is a database user that has the correct privileges to
> load/create the databases.
>
> >
> > Which seems odd since rhaynes74 is a user not a database name in the file.
> >
> > Sincerely,
> >
> > Dr. Ronald D. Haynes
> > Professor, Department of Mathematics and Statistics
> > Chair, MSc and Phd Scientific Computing Programs
> > Memorial University of Newfoundland
>
>
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com


Re: postgis

2022-07-19 Thread Imre Samu
> I would like to install postgis 3.04 on a debian 11
> postgres 12.
> I may use whatever repo.
> I install postgres from postgresql.org...

As I see - from the official postgresql.org debian11 repo,
you can only install the "postgresql-12-postgis-3" package  ( now:
Postgis=3.2.1 )

docker run -it --rm postgres:12 bash
  apt update && apt search postgis | grep postgresql-12-postgis-3
   ---> postgresql-12-postgis-3/bullseye-pgdg 3.2.1+dfsg-1.pgdg110+1 amd64
  cat /etc/os-release | grep VERSION
   ---> VERSION="11 (bullseye)"

Now the latest 3.0 version is http://postgis.net/2022/02/02/postgis-3.0.5/

regards,
  Imre

Marc Millas  ezt írta (időpont: 2022. júl. 18., H,
20:48):

> Hi,
> postgres 12.
> I may use whatever repo.
>
> I install postgres from postgresql.org...
>
>
>
> Marc MILLAS
> Senior Architect
> +33607850334
> www.mokadb.com
>
>
>
> On Mon, Jul 18, 2022 at 7:15 PM Adrian Klaver 
> wrote:
>
>> On 7/18/22 10:08, Marc Millas wrote:
>> > Hi,
>> >
>> > I would like to install postgis 3.04 on a debian 11.
>> >
>> > digging into various web sites, I didn't find the name of that package.
>> >
>> > can someone help ?
>>
>> Should have added to previous response, what version of Postgres?
>> >
>> > thanks
>> >
>> > Marc MILLAS
>> > Senior Architect
>> > +33607850334
>> > www.mokadb.com 
>> >
>>
>>
>> --
>> Adrian Klaver
>> adrian.kla...@aklaver.com
>>
>


Re: pgsql 10.19 : "ERROR: cannot convert infinity to numeric" except there is no infinity

2022-07-19 Thread Achilleas Mantzios



Στις 19/7/22 17:23, ο/η Tom Lane έγραψε:

"David G. Johnston"  writes:

On Tuesday, July 19, 2022, Achilleas Mantzios 
wrote:

ERROR:  cannot convert infinity to numeric

The column cept.value contains an infinity.  I see nothing unusual in any
of these queries given that fact.  If you try to cast the infinity to
numeric it will fail.  If that doesn’t happen the query won’t fail.

FWIW, PG 14 and later do support infinity in the numeric type.
Yes I noticed that, thank you Tom, I hope we'll be able to upgrade in 
the near future.


regards, tom lane







Re: pgsql 10.19 : "ERROR: cannot convert infinity to numeric" except there is no infinity

2022-07-19 Thread Achilleas Mantzios

Thanks David

Στις 19/7/22 17:19, ο/η David G. Johnston έγραψε:
On Tuesday, July 19, 2022, Achilleas Mantzios 
 wrote:



ERROR:  cannot convert infinity to numeric

-- has no problem testing against infinity

select it.id, cept.value::numeric as val,
       numrange(ceptl.min_alarm::numeric, ceptl.max_alarm::numeric, '()') as range
from items it, cept_report cept, dynacom.vessels vsl, machdefs md, cept_reportlimits ceptl
where it.id=cept.id AND md.defid=ceptl.defid AND it.defid=md.defid AND it.vslwhid=vsl.id
  AND vsl.vslstatus='Active' and md.application = 'Critical Equipment Performance Test'
  AND cept.systemdate >= (now()-'1 year'::interval) AND cept.value='inf'
ORDER BY 1;
 id | val | range
----+-----+-------
(0 rows)


The column cept.value contains an infinity.  I see nothing unusual in 
any of these queries given that fact.  If you try to cast the infinity 
to numeric it will fail.  If that doesn’t happen the query won’t fail.


Sorry I must have been dizzy today with so much support.

Yep, there are some infinity in there, but not in this result set.

I think when the casting is in the WHERE filter for some reason some 
subplan uses this filter (and fails). But when this check is applied to 
the result, no infinity is found and works correctly.




David J.


Re: postgis

2022-07-19 Thread Adrian Klaver

On 7/19/22 9:51 AM, Marc Millas wrote:

just... nothing !


I thought you said you used the Postgres community repo to install 
Postgres.


Is that not the case?




Marc MILLAS
Senior Architect
+33607850334
www.mokadb.com 






--
Adrian Klaver
adrian.kla...@aklaver.com




Re: restore question

2022-07-19 Thread Adrian Klaver

On 7/19/22 9:44 AM, Ronald Haynes wrote:

Thanks Ray, running

pSql -f backup-file.sql



psql: error: FATAL: database "rhaynes74" does not exist


No that is expected as you did not specify a database to connect to 
using -d . In that case psql uses the OS user name you are 
running the command as for the database name per:


https://www.postgresql.org/docs/current/libpq-connect.html#LIBPQ-PARAMKEYWORDS

dbname

The database name. Defaults to be the same as the user name. In 
certain contexts, the value is checked for extended formats; see Section 
34.1.1 for more details on those.


So change the command to:

psql -d postgres  -U  -f backup-file.sql

Where  is a database user that has the correct privileges to 
load/create the databases.




Which seems odd since rhaynes74 is a user not a database name in the file.

Sincerely,

Dr. Ronald D. Haynes
Professor, Department of Mathematics and Statistics
Chair, MSc and Phd Scientific Computing Programs
Memorial University of Newfoundland




--
Adrian Klaver
adrian.kla...@aklaver.com




Re: postgis

2022-07-19 Thread Marc Millas
just... nothing !

Marc MILLAS
Senior Architect
+33607850334
www.mokadb.com



On Tue, Jul 19, 2022 at 6:36 PM Adrian Klaver 
wrote:

> On 7/19/22 9:01 AM, Marc Millas wrote:
> > Thanks Adrian
> >
> > still, I see a bunch of files, and nothing that can be installed via
> > apt-get install.
> > The only "things" big enough to contain a postgis distrib, like
> >
> https://apt.postgresql.org/pub/repos/apt/pool/main/p/postgis/postgis_3.0.3%2Bdfsg.orig.tar.xz
> > <
> https://apt.postgresql.org/pub/repos/apt/pool/main/p/postgis/postgis_3.0.3%2Bdfsg.orig.tar.xz
> >
> >
> > can obviously not be installed via apt.
> >
>
> > what am I missing ??
>
> What does:
>
> apt-cache search postgresql-12-postgis
>
> return?
>
> >
> >
> > Marc MILLAS
> > Senior Architect
> > +33607850334
> > www.mokadb.com 
> >
> >
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com
>


Re: restore question

2022-07-19 Thread Adrian Klaver

On 7/19/22 9:49 AM, Benedict Holland wrote:

You have to create the database to restore to. Then pg_restore should work.


The OP is not using pg_restore but psql as the dump file is plain text.


Thanks,
Ben


--
Adrian Klaver
adrian.kla...@aklaver.com




Re: Concurrent INSERT statements with RETURNING clause resetting SERIAL sequence

2022-07-19 Thread Sebastien Flaesch
Tom,
If that's the behavior you want, you can build it out of standard SQL 
facilities (e.g. update a one-row table).
Can you elaborate please?

Do you mean the code should use an UPDATE on a one-row table to acquire a lock?

Seb


Re: restore question

2022-07-19 Thread Benedict Holland
You have to create the database to restore to. Then pg_restore should work.

Thanks,
Ben

On Tue, Jul 19, 2022, 12:44 PM Ronald Haynes  wrote:

> Thanks Ray, running
>
> pSql -f backup-file.sql
>
>
>
> psql: error: FATAL: database "rhaynes74" does not exist
>
> Which seems odd since rhaynes74 is a user not a database name in the file.
>
> Sincerely,
>
> Dr. Ronald D. Haynes
> Professor, Department of Mathematics and Statistics
> Chair, MSc and Phd Scientific Computing Programs
> Memorial University of Newfoundland
>
> *We acknowledge that the lands on which Memorial University’s campuses are
> situated are in the traditional territories of diverse Indigenous groups,
> and we acknowledge with respect the diverse histories and cultures of the
> Beothuk, Mi’kmaq, Innu, and Inuit of this province.*
> On Jul 19, 2022, 9:40 AM -0230, Ray O'Donnell , wrote:
>
> On 19/07/2022 11:11, Ronald Haynes wrote:
>
> Hi folks,  I am a fairly novice postgresql user.
>
>
>
> I have a backup file for a couple of small postgresql databases.The
> backup file was created using postgresql 12.2, my Mac now has a
> postgresql 13.x or 14.x versions.   I would like to restore the setup
> from the backup file.
>
>
> That looks like a dump file generated by pg_dumpall, so restoring it
> should be as simple as:
>
> psql -f backup-file.sql
>
> You'll possibly need other connection parameters as required by psql,
> such as -U (the PostgreSQL user to use for connecting) or others.
>
> HTH,
>
> Ray.
>
>
> --
> Raymond O'Donnell // Galway // Ireland
> r...@rodonnell.ie
>
>


Re: restore question

2022-07-19 Thread Ronald Haynes
Thanks Ray, running

pSql -f backup-file.sql



psql: error: FATAL: database "rhaynes74" does not exist

Which seems odd since rhaynes74 is a user not a database name in the file.

Sincerely,

Dr. Ronald D. Haynes
Professor, Department of Mathematics and Statistics
Chair, MSc and Phd Scientific Computing Programs
Memorial University of Newfoundland

We acknowledge that the lands on which Memorial University’s campuses are 
situated are in the traditional territories of diverse Indigenous groups, and 
we acknowledge with respect the diverse histories and cultures of the Beothuk, 
Mi’kmaq, Innu, and Inuit of this province.
On Jul 19, 2022, 9:40 AM -0230, Ray O'Donnell , wrote:
> On 19/07/2022 11:11, Ronald Haynes wrote:
> > Hi folks,  I am a fairly novice postgresql user.
> >
> >
> >
> > I have a backup file for a couple of small postgresql databases.    The
> > backup file was created using postgresql 12.2, my Mac now has a
> > postgresql 13.x or 14.x versions.   I would like to restore the setup
> > from the backup file.
> >
>
> That looks like a dump file generated by pg_dumpall, so restoring it
> should be as simple as:
>
> psql -f backup-file.sql
>
> You'll possibly need other connection parameters as required by psql,
> such as -U (the PostgreSQL user to use for connecting) or others.
>
> HTH,
>
> Ray.
>
>
> --
> Raymond O'Donnell // Galway // Ireland
> r...@rodonnell.ie


Re: postgis

2022-07-19 Thread Adrian Klaver

On 7/19/22 9:01 AM, Marc Millas wrote:

Thanks Adrian

still, I see a bunch of files, and nothing that can be installed via 
apt-get install.

The only "things" big enough to contain a postgis distrib, like
https://apt.postgresql.org/pub/repos/apt/pool/main/p/postgis/postgis_3.0.3%2Bdfsg.orig.tar.xz 



can obviously not be installed via apt.




what am I missing ??


What does:

apt-cache search postgresql-12-postgis

return?




Marc MILLAS
Senior Architect
+33607850334
www.mokadb.com 




--
Adrian Klaver
adrian.kla...@aklaver.com




Re: Getting the table ID

2022-07-19 Thread Igor Korot
Hi, guys,

On Tue, Jul 19, 2022 at 4:42 AM Walter Dörwald  wrote:
>
> On 19 Jul 2022, at 5:10, Igor Korot wrote:
>
> Hi, guys,
>
> In the database theory each table is identified as "schema_name"."table_name".
>
> When I tried to look at how to get the table id inside the PostgreSQL,
> I saw that I needed to look at the pg_class table.
>
> SELECT oid FROM pg_class WHERE relname = "table_name";
>
> However that query will give a non-unique table id (see the first sentence).
>
> So how do I get the table id based on the "schema_name.table_name"?
>
> There is a pg_namespace table - is this where the schema should come from?
> If yes - how?
> Looking at that table I don't see any unique fields...
> Or is this something that is hidden?
>
> In fact I'm trying to run following query:
>
> SELECT c.oid FROM pg_class c, pg_namespace nc WHERE nc.oid =
> c.relnamespace AND c.relname = ? AND nc.nspname = ?;
>
> from my ODBC based program, but it returns 0 rows on SQLFetch.
>
> I know PostgreSQL does not use '?' for query parameters
> but I thought that since its an ODBC everything should work.
>
> Nevertheless, all bindings were successful, but no rows are returned.
>
> Is this query correct?
>
> Thank you.
>
> That's more or less the same query that I am using:
>
> select
> r.oid as oid,
> n.nspname || '.' || r.relname as name
> from
> pg_catalog.pg_namespace n
> join
> pg_catalog.pg_class r on n.oid = r.relnamespace
> where
> (r.relkind = 'r') and
> (n.nspname not like 'pg_%') and
> (n.nspname != 'information_schema') and
> (n.nspname = 'email') and
> (r.relname = 'emailhistory')
>
> Maybe your problem has to do with uppercase/lowercase schema and/or table 
> names?
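As an aside on resolving a schema-qualified name to an OID, the regclass cast
gives a one-step alternative to joining pg_class and pg_namespace (the schema
and table below are just the ones from the example query):

-- returns the table's OID, or raises an error if the table does not exist
SELECT 'email.emailhistory'::regclass::oid;

-- returns NULL instead of erroring when the table is missing (PostgreSQL 9.4+)
SELECT to_regclass('email.emailhistory')::oid;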

Below is my C++ code based on the ODBC library:

[code]
SQLHSTMT stmt = 0;
SQLHDBC hdbc;
SQLLEN cbName, cbTableName = SQL_NTS, cbSchemaName = SQL_NTS;
long id;
int result = 0;
std::wstring query;
SQLWCHAR *qry = NULL, *tname = NULL, *sname = NULL;
query = L"SELECT c.oid FROM pg_class c, pg_namespace nc WHERE nc.oid = c.relnamespace AND c.relname = ? AND nc.nspname = ?;";
qry = new SQLWCHAR[query.length() + 2];
tname = new SQLWCHAR[tableName.length() + 2];
sname = new SQLWCHAR[schemaName.length() + 2];
memset( tname, '\0', tableName.length() + 2 );
memset( sname, '\0', schemaName.length() + 2);
uc_to_str_cpy( sname, schemaName );
uc_to_str_cpy( tname, tableName );
memset( qry, '\0', query.length() + 2 );
uc_to_str_cpy( qry, query );
SQLRETURN retcode = SQLAllocHandle( SQL_HANDLE_DBC, m_env, &hdbc );
if( retcode != SQL_SUCCESS && retcode != SQL_SUCCESS_WITH_INFO )
{
GetErrorMessage( errorMsg, 0 );
result = 1;
}
else
{
SQLSMALLINT OutConnStrLen;
retcode = SQLDriverConnect( hdbc, NULL, m_connectString, SQL_NTS,
NULL, 0, &OutConnStrLen, SQL_DRIVER_NOPROMPT );
if( retcode != SQL_SUCCESS && retcode != SQL_SUCCESS_WITH_INFO )
{
GetErrorMessage( errorMsg, 2, hdbc );
result = 1;
}
else
{
auto dbName = new SQLWCHAR[pimpl->m_dbName.length() + 2];
memset( dbName, '\0', pimpl->m_dbName.length() + 2 );
uc_to_str_cpy( dbName, pimpl->m_dbName );
retcode = SQLSetConnectAttr( hdbc, SQL_ATTR_CURRENT_CATALOG, dbName, SQL_NTS );
delete[] dbName;
dbName = nullptr;
if( retcode != SQL_SUCCESS && retcode != SQL_SUCCESS_WITH_INFO )
{
GetErrorMessage( errorMsg, 2 );
result = 1;
retcode = SQLEndTran( SQL_HANDLE_DBC, m_hdbc, SQL_ROLLBACK );
}
else
{
retcode = SQLAllocHandle( SQL_HANDLE_STMT, hdbc, &stmt );
if( retcode != SQL_SUCCESS && retcode != SQL_SUCCESS_WITH_INFO )
{
GetErrorMessage( errorMsg, 2, hdbc );
result = 1;
}
else
{
retcode = SQLPrepare( stmt, qry, SQL_NTS );
if( retcode != SQL_SUCCESS && retcode != SQL_SUCCESS_WITH_INFO )
{
GetErrorMessage( errorMsg, 1, stmt );
result = 1;
}
else
{
SQLSMALLINT dataType[2], decimalDigit[2], nullable[2];
SQLULEN parameterSize[2];
retcode = SQLDescribeParam( stmt, 1, &dataType[0], &parameterSize[0],
&decimalDigit[0], &nullable[0] );
if( retcode != SQL_SUCCESS && retcode != SQL_SUCCESS_WITH_INFO )
{
GetErrorMessage( errorMsg, 1, stmt );
result = 1;
}
else
{
retcode = SQLBindParameter( stmt, 1, SQL_PARAM_INPUT, SQL_C_DEFAULT,
dataType[0], parameterSize[0], decimalDigit[0], tname, 0,
&cbTableName );
if( retcode != SQL_SUCCESS && retcode != SQL_SUCCESS_WITH_INFO )
{
GetErrorMessage( errorMsg, 1, stmt );
result = 1;
}
}
if( !result )
{
retcode = SQLDescribeParam( stmt, 2, &dataType[1], &parameterSize[1],
&decimalDigit[1], &nullable[1] );
if( retcode != SQL_SUCCESS && retcode != SQL_SUCCESS_WITH_INFO )
{
GetErrorMessage( errorMsg, 1, stmt );
result = 1;
}
else
{
retcode = SQLBindParameter( stmt, 2, SQL_PARAM_INPUT, SQL_C_DEFAULT,
dataType[1], parameterSize[1], decimalDigit[1], sname, 0,
&cbSchemaName );
if( retcode != SQL_SUCCESS && retcode != SQL_SUCCESS_WITH_INFO )
{
GetErrorMessage( errorMsg, 1, stmt );
result = 1;
}
}
}
if( !result )
{
retcode = SQLExecute( stmt );
if( retcode != SQL_SUCCESS && retcode != SQL_SUCCESS_WITH_INFO )
{
GetErrorMessage( errorMsg, 1, stmt );
result = 1;
}
else
{
retcode = SQLBindCol( stmt, 1, SQL_C_SLONG, &id, 100, &cbName );
if( retcode != SQL_SUCCESS && retcode != SQL_SUCCESS_WITH_INFO )
{
GetErrorMessage( errorMsg, 1, stmt );
result = 1;
}
else
{
retcode 

Re: About limitation of using postgresql in china

2022-07-19 Thread jian he
On Tue, Jul 19, 2022 at 2:07 PM pig lee  wrote:

> Dear postgresql member
>
>
>
> I will use postgresql in china for a project.
>
> But I’m not sure Whether there are some limitations when using
> postgresql in china(limitation for China only).
>
> For example,license limitation in china or other limitation when used.
>
>
> Can you tell me the postgresql limitations when used for china only.
>
>
>
> Look forward to your Email.
>
> Thank you very much.
>
>
>
> Best Regards
>
> Pengkun
>

Hi,

postgresql license: https://www.postgresql.org/about/licence/

technical limitations: so far I have tested
Chinese text string to date, example:
https://stackoverflow.com/questions/67597719/how-to-convert-02-3%E6%9C%88-21-to-date-21-03-02-00-psql
date to Chinese text:
https://www.postgresql.org/docs/current/functions-formatting.html
ordering by Chinese pinyin and by stroke are both OK.

Obviously there are some tricky cases you probably need to take care of,
like: is 壹 (U+58F9) the same as 一 (U+4E00)?
Also, there is simplified Chinese and traditional Chinese issues.

I didn't test it, but I guess it would take more effort if you name your
SQL identifiers in Chinese.



-- 
 I recommend David Deutsch's <>

  Jian


Re: postgis

2022-07-19 Thread Marc Millas
Thanks Adrian

still, I see a bunch of files, and nothing that can be installed via
apt-get install.
The only "things" big enough to contain a postgis distrib, like
https://apt.postgresql.org/pub/repos/apt/pool/main/p/postgis/postgis_3.0.3%2Bdfsg.orig.tar.xz

can obviously not be installed via apt.

what am I missing ??


Marc MILLAS
Senior Architect
+33607850334
www.mokadb.com



On Tue, Jul 19, 2022 at 12:09 AM Adrian Klaver 
wrote:

> On 7/18/22 11:48, Marc Millas wrote:
> > Hi,
> > postgres 12.
> > I may use whatever repo.
> >
> > I install postgres from postgresql.org...
>
> You pretty much need to install from the same repo for PostGIS.
> Otherwise you will most likely run into compatibility issues.
>
>  From here:
>
> https://apt.postgresql.org/pub/repos/apt/pool/main/p/postgis/
>
> I only see 3.0.1 and 3.0.3 as options.
>
> A more complete answer will need to come from one of the maintainers.
>
> >
> >
> >
> > Marc MILLAS
> > Senior Architect
> > +33607850334
> > www.mokadb.com 
> >
> >
>
>
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com
>


Re: Concurrent INSERT statements with RETURNING clause resetting SERIAL sequence

2022-07-19 Thread Tom Lane
Sebastien Flaesch  writes:
> I try to update the underlying sequence of a SERIAL column, by using a 
> RETURNING clause in my INSERT statement, which is checking that the column 
> value is greater than the last_value of my sequence, and reset the sequence 
> with setval() if needed.

It's not too surprising that that doesn't work, if you're coding it
based on this assumption:

> The whole INSERT statement (including the code in the RETURNING clause), 
> should execute in a ATOMIC manner.

Sequence-related actions are always carried out immediately, they do
not participate in any atomicity guarantees about the calling transaction.
Without this, any sequence update would have to block all concurrent
uses of that sequence until they see whether the first update commits.

If that's the behavior you want, you can build it out of standard SQL
facilities (e.g. update a one-row table).  The point of sequence objects
is exactly to provide a feature with better concurrent performance,
at the cost of no rollback guarantees.

So, there's no bug here, and calling it one isn't going to change
anybody's mind about that.

regards, tom lane




Re: pgsql 10.19 : "ERROR: cannot convert infinity to numeric" except there is no infinity

2022-07-19 Thread Adrian Klaver

On 7/19/22 03:38, Achilleas Mantzios wrote:

I reformatted queries to see thing better.



-- still has problem testing the range

select it.id, cept.value::numeric as val,
       numrange(ceptl.min_alarm::numeric, ceptl.max_alarm::numeric, '()') as range
from items it, cept_report cept, dynacom.vessels vsl, machdefs md, cept_reportlimits ceptl
where it.id=cept.id AND md.defid=ceptl.defid AND it.defid=md.defid AND it.vslwhid=vsl.id
  AND vsl.vslstatus='Active' and md.application = 'Critical Equipment Performance Test'
  AND cept.systemdate >= (now()-'1 year'::interval)
  AND numrange(ceptl.min_alarm::numeric, ceptl.max_alarm::numeric, '()') @> cept.value::numeric
ORDER BY 1;

ERROR:  cannot convert infinity to numeric


SELECT
it.id,
cept.value::numeric AS val,
numrange(ceptl.min_alarm::numeric, ceptl.max_alarm::numeric, '()') 
AS RANGE

FROM
items it,
cept_report cept,
dynacom.vessels vsl,
machdefs md,
cept_reportlimits ceptl
WHERE
it.id = cept.id
AND md.defid = ceptl.defid
AND it.defid = md.defid
AND it.vslwhid = vsl.id
AND vsl.vslstatus = 'Active'
AND md.application = 'Critical Equipment Performance Test'
AND cept.systemdate >= (now() - '1 year'::interval)
AND numrange(ceptl.min_alarm::numeric, ceptl.max_alarm::numeric, '()') 
@> cept.value::numeric

ORDER BY
1;

So the above fails. In your title when you say there is no infinity that 
means the cept.value, ceptl.min_alarm or ceptl.max_alarm  fields do not 
have any '-infinity' or 'infinity' values, correct?





-- no problem if the query goes into its barrier and the test done outside
with bar as (
    select it.id, cept.value::numeric as val,
           numrange(ceptl.min_alarm::numeric, ceptl.max_alarm::numeric, '()') as range
    from items it, cept_report cept, dynacom.vessels vsl, machdefs md, cept_reportlimits ceptl
    where it.id=cept.id AND md.defid=ceptl.defid AND it.defid=md.defid AND it.vslwhid=vsl.id
      AND vsl.vslstatus='Active' and md.application = 'Critical Equipment Performance Test'
      AND cept.systemdate >= (now()-'1 year'::interval)
    ORDER BY 1
)
select * from bar where NOT range @> val;
-- good data here


WITH bar AS (
SELECT
it.id,
cept.value::numeric AS val,
numrange(ceptl.min_alarm::numeric, ceptl.max_alarm::numeric, 
'()') AS

RANGE
FROM
items it,
cept_report cept,
dynacom.vessels vsl,
machdefs md,
cept_reportlimits ceptl
WHERE
it.id = cept.id
AND md.defid = ceptl.defid
AND it.defid = md.defid
AND it.vslwhid = vsl.id
AND vsl.vslstatus = 'Active'
AND md.application = 'Critical Equipment Performance Test'
AND cept.systemdate >= (now() - '1 year'::interval)
ORDER BY
1
)
SELECT
*
FROM
bar
WHERE
NOT RANGE @> val;

This version succeeds, correct? So breaking the range construction into 
one step and the @> test into another works. Though I am not sure why 
 @> changed to NOT  @>?


--
Achilleas Mantzios
DBA, Analyst, IT Lead
IT DEPT
Dynacom Tankers Mgmt




--
Adrian Klaver
adrian.kla...@aklaver.com




Concurrent INSERT statements with RETURNING clause resetting SERIAL sequence

2022-07-19 Thread Sebastien Flaesch
Hello!

I try to update the underlying sequence of a SERIAL column, by using a 
RETURNING clause in my INSERT statement, which is checking that the column 
value is greater than the last_value of my sequence, and reset the sequence 
with setval() if needed.

When running several client processes in concurrently, using this INSERT 
statement, I get duplicate primary key errors.

The SQL code of my INSERT looks like this:

insert into tab1 (name) VALUES ('aaa')
returning tab1.pkey,
  (select case
     when tab1.pkey >= (select last_value from public.tab1_pkey_seq)
       then setval('public.tab1_pkey_seq', tab1.pkey, true)
     else 0
   end)

To me something is wrong with the behavior of PostgreSQL:

The whole INSERT statement (including the code in the RETURNING clause) should 
execute in an ATOMIC manner.

Obviously if several processes execute that code in parallel, and if the select 
last_value / setval() get mixed up, we are in trouble...

Can someone confirm this is a PostgreSQL bug, or help me to find the right 
code to avoid the issue?

To reproduce:

  1.  Create tab1 table and stored procedure for testing with insert-tab1.sql
  2.  Run several psql processes in parallel, calling the stored procedure: sh 
./start2.sh (Linux/bash)

Thanks!

Seb


inserts-tab1.sql
Description: inserts-tab1.sql


call-proc.sql
Description: call-proc.sql


start2.sh
Description: start2.sh


Re: How to handle failed COMMIT

2022-07-19 Thread Håvar Nøvik
> Correct, the client did not get confirmation of commit success so it must 
> operate as if it failed.

I mean that’s the point, the client can’t operate as if it failed. It must 
operate as if the state is unknown. But maybe that’s the correct application 
behaviour, it's just that I haven’t thought this through previously.

/Håvar Nøvik

On Tue, Jul 19, 2022, at 16:12, David G. Johnston wrote:
> On Monday, July 18, 2022, Håvar Nøvik  wrote:
>> 
>> try
>>execute transactional sql
>> catch (commit failed)
>>if (data is not stored)
>>  // regard data as not stored
> 
> Correct, the client did not get confirmation of commit success so it must 
> operate as if it failed.
> 
> David J. 


Re: Setting up a server with previous day data

2022-07-19 Thread Adrian Klaver

On 7/19/22 00:22, Srinivasa T N wrote:

Hi All,
    I have a primary postgresql 12 server which is 
being continuously used for transaction processing.  For reporting 
purposes, I want to set up a secondary server which has got previous day 
data.  Everyday night, I want the data from primary to be shifted to 
secondary.  I can achieve this manually using pg_basebackup on primary 
and pg_restore on secondary.  Is there any other automated efficient way 
to achieve the same?  Any relevant docs would be helpful.


Use Postgres FDW:

https://www.postgresql.org/docs/current/postgres-fdw.html

to set up a link between the two?
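For reference, a minimal sketch of such a postgres_fdw link (host, database,
user and schema names are placeholders):

-- run on the reporting server
CREATE EXTENSION postgres_fdw;

CREATE SERVER primary_srv
    FOREIGN DATA WRAPPER postgres_fdw
    OPTIONS (host 'primary.example.com', port '5432', dbname 'appdb');

CREATE USER MAPPING FOR CURRENT_USER
    SERVER primary_srv
    OPTIONS (user 'report_user', password 'secret');

CREATE SCHEMA primary_remote;
IMPORT FOREIGN SCHEMA public FROM SERVER primary_srv INTO primary_remote;

A nightly INSERT ... SELECT from the foreign tables into local reporting tables
would then carry over the previous day's data without a full
pg_basebackup/restore cycle.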



Regards,
Seenu.



--
Adrian Klaver
adrian.kla...@aklaver.com




Re: Out Of Memory

2022-07-19 Thread Adrian Klaver

On 7/19/22 06:16, ghoostin...@mail.com wrote:

Hello, I’m using Odoo ERP v12 Community and I want to migrate to v14. I have a 
table containing more than 5M rows, and after 8 hours of execution it shows the 
message “out of memory”, so I need your help.


And we need information:

1) Detail how you are doing migration.

2) Are you doing this on the same machine?

3) OS and version.

4) Hardware specifications for machine e.g. memory, disk space, etc.


--
Adrian Klaver
adrian.kla...@aklaver.com




Re: How to handle failed COMMIT

2022-07-19 Thread David G. Johnston
On Tuesday, July 19, 2022, Håvar Nøvik  wrote:

> > Correct, the client did not get confirmation of commit success so it
> must operate as if it failed.
>
> I mean that’s the point, the client can’t operate as if it failed. It must
> operate as the state is unknown. But maybe that’s the correct application
> behaviour, just that I haven’t thought this through previously.
>

Right, since you sent commit there is now a non-zero chance the data is
committed but the client is unaware of that fact.
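
A minimal sketch of resolving that ambiguity, assuming PostgreSQL 10 or later 
and a hypothetical accounts table: record the transaction id before COMMIT and 
check its fate from a new connection with txid_status().

BEGIN;
INSERT INTO accounts (id, balance) VALUES (1, 100);
SELECT txid_current();      -- remember this value client-side, e.g. 123456
COMMIT;                     -- the connection may be lost right here

-- later, on a fresh connection:
SELECT txid_status(123456); -- 'committed', 'aborted' or 'in progress'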

David J.


Re: pgsql 10.19 : "ERROR: cannot convert infinity to numeric" except there is no infinity

2022-07-19 Thread Tom Lane
"David G. Johnston"  writes:
> On Tuesday, July 19, 2022, Achilleas Mantzios 
> wrote:
>> ERROR:  cannot convert infinity to numeric

> The column cept.value contains an infinity.  I see nothing unusual in any
> of these queries given that fact.  If you try to cast the infinity to
> numeric it will fail.  If that doesn’t happen the query won’t fail.

FWIW, PG 14 and later do support infinity in the numeric type.

regards, tom lane




Re: pgsql 10.19 : "ERROR: cannot convert infinity to numeric" except there is no infinity

2022-07-19 Thread David G. Johnston
On Tuesday, July 19, 2022, Achilleas Mantzios 
wrote:

>
> ERROR:  cannot convert infinity to numeric
>
> -- has no problem testing against infinity
>
> select it.id, cept.value::numeric as val,
>     numrange(ceptl.min_alarm::numeric, ceptl.max_alarm::numeric, '()') as range
>   from items it, cept_report cept, dynacom.vessels vsl, machdefs md, cept_reportlimits ceptl
>   where it.id=cept.id AND md.defid=ceptl.defid AND it.defid=md.defid
>     AND it.vslwhid=vsl.id AND vsl.vslstatus='Active'
>     AND md.application = 'Critical Equipment Performance Test'
>     AND cept.systemdate >= (now()-'1 year'::interval)
>     AND cept.value='inf' ORDER BY 1;
>  id | val | range
> ----+-----+-------
> (0 rows)
>
>
The column cept.value contains an infinity.  I see nothing unusual in any
of these queries given that fact.  If you try to cast the infinity to
numeric it will fail.  If that doesn’t happen the query won’t fail.

David J.


Re: How to handle failed COMMIT

2022-07-19 Thread David G. Johnston
On Monday, July 18, 2022, Håvar Nøvik  wrote:

>
> try
>execute transactional sql
> catch (commit failed)
>if (data is not stored)
>  // regard data as not stored


Correct, the client did not get confirmation of commit success so it must
operate as if it failed.

David J.


Re: About limitation of using postgresql in china

2022-07-19 Thread Tom Lane
pig lee  writes:
> But I’m not sure Whether there are some limitations when using  postgresql
> in china(limitation for China only).

There are no such restrictions so far as the Postgres project is concerned.
What the Chinese government might think about it, I don't know.

regards, tom lane




Out Of Memory

2022-07-19 Thread ghoostinger
Hello, I’m using Odoo ERP v12 Community and I want to migrate to v14. I have a 
table containing more than 5M rows, and after 8 hours of execution it shows the 
message “out of memory”, so I need your help.




RE: pg_dump is filling C: drive up to 100 percent

2022-07-19 Thread Meera Nair
Hi Thomas,

This worked for me, thanks a lot.

Regards,
Meera

From: Thomas Boussekey 
Sent: Tuesday, July 19, 2022 5:33 PM
To: Meera Nair 
Cc: Abdul Qoyyuum ; 
pgsql-general@lists.postgresql.org
Subject: Re: pg_dump is filling C: drive up to 100 percent

External email. Inspect before opening.

Hello Meera,

Le mar. 19 juil. 2022 à 13:42, Meera Nair 
mailto:mn...@commvault.com>> a écrit :
Hi Abdul,

We do realize that. With tar format, is there a way to customize the path used 
for temporary local files?
Some way to configure another drive instead of using C:\?

I would try this solution: 
https://superuser.com/a/1448861/278835

Regards,
Meera

From: Abdul Qoyyuum 
mailto:aqoyy...@cardaccess.com.au>>
Sent: Tuesday, July 19, 2022 3:40 PM
To: Meera Nair mailto:mn...@commvault.com>>
Cc: 
pgsql-general@lists.postgresql.org
Subject: Re: pg_dump is filling C: drive up to 100 percent

External email. Inspect before opening.

Try dumping without tar format. 
https://dba.stackexchange.com/a/52730

On Tue, Jul 19, 2022 at 4:33 PM Meera Nair 
mailto:mn...@commvault.com>> wrote:
Hi team,

pg_dump  is filling C:\

This is for postgres version 12. Binary directory, data directory are in E:\
I'm redirecting pg_dump output also to E:\, I was taking a tar dump output.
But C:\ is getting filled up . Looks like it is used for some sort of temporary 
staging.
Is there a way not to use C:\ for this?

Regards,
Meera



--
Abdul Qoyyuum Bin Haji Abdul Kadir
HP No: +673 720 8043

HTH,
Thomas


Re: restore question

2022-07-19 Thread Ray O'Donnell

On 19/07/2022 11:11, Ronald Haynes wrote:

Hi folks,  I am a fairly novice postgresql user.



I have a backup file for a couple of small postgresql databases.    The 
backup file was created using postgresql 12.2, my Mac now has a 
postgresql 13.x or 14.x versions.   I would like to restore the setup 
from the backup file.




That looks like a dump file generated by pg_dumpall, so restoring it 
should be as simple as:


   psql -f backup-file.sql

You'll possibly need other connection parameters as required by psql, 
such as -U (the PostgreSQL user to use for connecting) or others.


HTH,

Ray.


--
Raymond O'Donnell // Galway // Ireland
r...@rodonnell.ie




Re: postgis

2022-07-19 Thread Mladen Gogala

On 7/18/22 13:08, Marc Millas wrote:

Hi,

I would like to install postgis 3.04 on a debian 11.

digging into various web sites, I didnt found the name of that packet.

can someone help ?

thanks

Marc MILLAS
Senior Architect
+33607850334
www.mokadb.com 

Install alien and convert the .rpm package into the .deb package. When 
dealing with alien be careful, so that it doesn't jump and stick to your 
face. Bad things can happen then.


--
Mladen Gogala
Database Consultant
Tel: (347) 321-1217
https://dbwhisperer.wordpress.com


Re: pg_dump is filling C: drive up to 100 percent

2022-07-19 Thread Thomas Boussekey
Hello Meera,

Le mar. 19 juil. 2022 à 13:42, Meera Nair  a écrit :

> Hi Abdul,
>
>
>
> We do realize that. With tar format, is there a way to customize the path
> used for temporary local files?
>
> Some way to configure another drive instead of using C:\?
>

I would try this solution: https://superuser.com/a/1448861/278835

>
>
> Regards,
>
> Meera
>
>
>
> *From:* Abdul Qoyyuum 
> *Sent:* Tuesday, July 19, 2022 3:40 PM
> *To:* Meera Nair 
> *Cc:* pgsql-general@lists.postgresql.org
> *Subject:* Re: pg_dump is filling C: drive up to 100 percent
>
>
>
> External email. Inspect before opening.
>
>
>
> Try dumping without tar format. https://dba.stackexchange.com/a/52730
> 
>
>
>
> On Tue, Jul 19, 2022 at 4:33 PM Meera Nair  wrote:
>
> Hi team,
>
>
>
> pg_dump  is filling C:\
>
>
>
> This is for postgres version 12. Binary directory, data directory are in
> E:\
>
> I’m redirecting pg_dump output also to E:\, I was taking a tar dump output.
>
> But C:\ is getting filled up . Looks like it is used for some sort of
> temporary staging.
>
> Is there a way not to use C:\ for this?
>
>
>
> Regards,
>
> Meera
>
>
>
>
>
>
> --
>
> Abdul Qoyyuum Bin Haji Abdul Kadir
>
> HP No: +673 720 8043
>

HTH,
Thomas


RE: pg_dump is filling C: drive up to 100 percent

2022-07-19 Thread Meera Nair
Hi Abdul,

We do realize that. With tar format, is there a way to customize the path used 
for temporary local files?
Some way to configure another drive instead of using C:\?

Regards,
Meera

From: Abdul Qoyyuum 
Sent: Tuesday, July 19, 2022 3:40 PM
To: Meera Nair 
Cc: pgsql-general@lists.postgresql.org
Subject: Re: pg_dump is filling C: drive up to 100 percent

External email. Inspect before opening.

Try dumping without tar format. 
https://dba.stackexchange.com/a/52730

On Tue, Jul 19, 2022 at 4:33 PM Meera Nair 
mailto:mn...@commvault.com>> wrote:
Hi team,

pg_dump  is filling C:\

This is for postgres version 12. Binary directory, data directory are in E:\
I'm redirecting pg_dump output also to E:\, I was taking a tar dump output.
But C:\ is getting filled up . Looks like it is used for some sort of temporary 
staging.
Is there a way not to use C:\ for this?

Regards,
Meera



--
Abdul Qoyyuum Bin Haji Abdul Kadir
HP No: +673 720 8043


pgsql 10.19 : "ERROR: cannot convert infinity to numeric" except there is no infinity

2022-07-19 Thread Achilleas Mantzios

dynacom=# select it.id, cept.value::numeric as val,
    numrange(ceptl.min_alarm::numeric, ceptl.max_alarm::numeric, '()') as range
  from items it, cept_report cept, dynacom.vessels vsl, machdefs md, cept_reportlimits ceptl
  where it.id=cept.id AND md.defid=ceptl.defid AND it.defid=md.defid
    AND it.vslwhid=vsl.id AND vsl.vslstatus='Active'
    AND md.application = 'Critical Equipment Performance Test'
    AND cept.systemdate >= (now()-'1 year'::interval)
    AND cept.value < -100 ORDER BY 1;
 id | val | range
----+-----+-------
(0 rows)


select it.id, cept.value::numeric as val,
    numrange(ceptl.min_alarm::numeric, ceptl.max_alarm::numeric, '()') as range
  from items it, cept_report cept, dynacom.vessels vsl, machdefs md, cept_reportlimits ceptl
  where it.id=cept.id AND md.defid=ceptl.defid AND it.defid=md.defid
    AND it.vslwhid=vsl.id AND vsl.vslstatus='Active'
    AND md.application = 'Critical Equipment Performance Test'
    AND cept.systemdate >= (now()-'1 year'::interval)
    AND cept.value::numeric > '-1' ORDER BY 1;
ERROR:  cannot convert infinity to numeric

-- has no problem testing against infinity

select it.id, cept.value::numeric as val,
    numrange(ceptl.min_alarm::numeric, ceptl.max_alarm::numeric, '()') as range
  from items it, cept_report cept, dynacom.vessels vsl, machdefs md, cept_reportlimits ceptl
  where it.id=cept.id AND md.defid=ceptl.defid AND it.defid=md.defid
    AND it.vslwhid=vsl.id AND vsl.vslstatus='Active'
    AND md.application = 'Critical Equipment Performance Test'
    AND cept.systemdate >= (now()-'1 year'::interval)
    AND cept.value='inf' ORDER BY 1;
 id | val | range
----+-----+-------
(0 rows)

-- still has problem testing the range

select it.id, cept.value::numeric as val,
    numrange(ceptl.min_alarm::numeric, ceptl.max_alarm::numeric, '()') as range
  from items it, cept_report cept, dynacom.vessels vsl, machdefs md, cept_reportlimits ceptl
  where it.id=cept.id AND md.defid=ceptl.defid AND it.defid=md.defid
    AND it.vslwhid=vsl.id AND vsl.vslstatus='Active'
    AND md.application = 'Critical Equipment Performance Test'
    AND cept.systemdate >= (now()-'1 year'::interval)
    AND numrange(ceptl.min_alarm::numeric, ceptl.max_alarm::numeric, '()') @> cept.value::numeric ORDER BY 1;
ERROR:  cannot convert infinity to numeric


-- no problem if the query goes into its barrier and the test done outside
with bar as (
  select it.id, cept.value::numeric as val,
      numrange(ceptl.min_alarm::numeric, ceptl.max_alarm::numeric, '()') as range
    from items it, cept_report cept, dynacom.vessels vsl, machdefs md, cept_reportlimits ceptl
    where it.id=cept.id AND md.defid=ceptl.defid AND it.defid=md.defid
      AND it.vslwhid=vsl.id AND vsl.vslstatus='Active'
      AND md.application = 'Critical Equipment Performance Test'
      AND cept.systemdate >= (now()-'1 year'::interval)
    ORDER BY 1
)
select * from bar where NOT range @> val;
-- good data here
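
On pre-14 servers another option is to guard the cast itself, so rows holding 
+/-Infinity are skipped instead of aborting the query. A sketch against a 
simplified form of the query above, assuming cept.value is double precision:

select it.id, cept.value::numeric as val
  from items it, cept_report cept
  where it.id = cept.id
    and cept.systemdate >= (now() - '1 year'::interval)
    -- CASE evaluates the guard before the ELSE cast, so infinite values
    -- never reach the numeric conversion
    and (case when cept.value in ('Infinity'::float8, '-Infinity'::float8)
              then null
              else cept.value::numeric
         end) > -1
  order by 1;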

--
Achilleas Mantzios
DBA, Analyst, IT Lead
IT DEPT
Dynacom Tankers Mgmt


restore question

2022-07-19 Thread Ronald Haynes
Hi folks,  I am a fairly novice postgresql user.



I have a backup file for a couple of small postgresql databases.    The backup 
file was created using postgresql 12.2, my Mac now has a postgresql 13.x or 
14.x versions.   I would like to restore the setup from the backup file.

Running psql postgres followed by \l gives:

                                  List of databases
   Name    |   Owner   | Encoding | Collate | Ctype |    Access privileges
-----------+-----------+----------+---------+-------+-------------------------
 postgres  | rhaynes74 | UTF8     | C       | C     |
 template0 | rhaynes74 | UTF8     | C       | C     | =c/rhaynes74           +
           |           |          |         |       | rhaynes74=CTc/rhaynes74
 template1 | rhaynes74 | UTF8     | C       | C     | =c/rhaynes74           +
           |           |          |         |       | rhaynes74=CTc/rhaynes74

The users that own the databases in the backup do not currently exist.

Here is the header information in the backup file; what are the steps to 
proceed with the restore?

-
--
-- PostgreSQL database cluster dump
--

SET default_transaction_read_only = off;

SET client_encoding = 'UTF8';
SET standard_conforming_strings = on;

--
-- Roles
--

CREATE ROLE postgres;
ALTER ROLE postgres WITH SUPERUSER INHERIT CREATEROLE CREATEDB LOGIN 
NOREPLICATION NOBYPASSRLS;
CREATE ROLE rhaynes;
ALTER ROLE rhaynes WITH SUPERUSER INHERIT CREATEROLE CREATEDB LOGIN REPLICATION 
BYPASSRLS;






--
-- Databases
--

--
-- Database "template1" dump
--

\connect template1

--
-- PostgreSQL database dump
--

-- Dumped from database version 12.2
-- Dumped by pg_dump version 12.2

SET statement_timeout = 0;
SET lock_timeout = 0;
SET idle_in_transaction_session_timeout = 0;
SET client_encoding = 'UTF8';
SET standard_conforming_strings = on;
SELECT pg_catalog.set_config('search_path', '', false);
SET check_function_bodies = false;
SET xmloption = content;
SET client_min_messages = warning;
SET row_security = off;

--
-- PostgreSQL database dump complete
--

--
-- Database "hockey1" dump
--

--
-- PostgreSQL database dump
--

-- Dumped from database version 12.2
-- Dumped by pg_dump version 12.2

SET statement_timeout = 0;
SET lock_timeout = 0;
SET idle_in_transaction_session_timeout = 0;
SET client_encoding = 'SQL_ASCII';
SET standard_conforming_strings = on;
SELECT pg_catalog.set_config('search_path', '', false);
SET check_function_bodies = false;
SET xmloption = content;
SET client_min_messages = warning;
SET row_security = off;

--
-- Name: hockey1; Type: DATABASE; Schema: -; Owner: rhaynes
--

CREATE DATABASE hockey1 WITH TEMPLATE = template0 ENCODING = 'SQL_ASCII' 
LC_COLLATE = 'C' LC_CTYPE = 'C';


ALTER DATABASE hockey1 OWNER TO rhaynes;

\connect hockey1

SET statement_timeout = 0;
SET lock_timeout = 0;
SET idle_in_transaction_session_timeout = 0;
SET client_encoding = 'SQL_ASCII';
SET standard_conforming_strings = on;
SELECT pg_catalog.set_config('search_path', '', false);
SET check_function_bodies = false;
SET xmloption = content;
SET client_min_messages = warning;
SET row_security = off;

SET default_tablespace = '';

SET default_table_access_method = heap;

--
-- Name: tradelistnew; Type: TABLE; Schema: public; Owner: rhaynes
--

CREATE TABLE public.tradelistnew (
 year character varying(80),
 manufacturer character varying(80),
 set character varying(80),
 card_number character varying(5),
 player_name character varying(80),
 quantitynrmt integer,
 quantityexmt integer,
 quantityex integer,
 quantityvg integer,
 quantitypfg integer,
 valuenrmt real,
 valueexmt real,
 valueex real,
 valuevg real,
 valuepfg real
);

Sincerely,

Dr. Ronald D. Haynes
Professor, Department of Mathematics and Statistics
Chair, MSc and Phd Scientific Computing Programs
Memorial University of Newfoundland

We acknowledge that the lands on which Memorial University’s campuses are 
situated are in the traditional territories of diverse Indigenous groups, and 
we acknowledge with respect the diverse histories and cultures of the Beothuk, 
Mi’kmaq, Innu, and Inuit of this province.


Re: pg_dump is filling C: drive up to 100 percent

2022-07-19 Thread Abdul Qoyyuum
Try dumping without tar format. https://dba.stackexchange.com/a/52730
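
For example, the custom format writes a single compressed archive straight to 
the target path and still supports selective and parallel restore with 
pg_restore (paths and database names below are placeholders; the target 
database for pg_restore must already exist):

pg_dump -U postgres -Fc -f E:\backups\mydb.dump mydb
pg_restore -U postgres -d mydb_restored -j 4 E:\backups\mydb.dump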

On Tue, Jul 19, 2022 at 4:33 PM Meera Nair  wrote:

> Hi team,
>
>
>
> pg_dump  is filling C:\
>
>
>
> This is for postgres version 12. Binary directory, data directory are in
> E:\
>
> I’m redirecting pg_dump output also to E:\, I was taking a tar dump output.
>
> But C:\ is getting filled up . Looks like it is used for some sort of
> temporary staging.
>
> Is there a way not to use C:\ for this?
>
>
>
> Regards,
>
> Meera
>
>
>


-- 
Abdul Qoyyuum Bin Haji Abdul Kadir
HP No: +673 720 8043


Re: Setting up a server with previous day data

2022-07-19 Thread Abdul Qoyyuum
You could instead set up High Availability and use your secondary as actual
streamed and backed up database.
https://www.postgresql.org/docs/current/high-availability.html

This way, you get up-to-date data that you can query/generate reports with.
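
A minimal sketch of creating such a standby on the secondary machine (host 
name, replication role and data directory are placeholders; assumes PostgreSQL 
12 and that the primary already allows replication connections in pg_hba.conf):

# Clone the primary; -R creates standby.signal and writes primary_conninfo
# into postgresql.auto.conf so the copy starts as a streaming standby.
pg_basebackup -h primary.example.com -U replicator \
    -D /var/lib/postgresql/12/main -R -X stream -P

# Start the standby; it follows the primary and accepts read-only queries.
pg_ctl -D /var/lib/postgresql/12/main start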

On Tue, Jul 19, 2022 at 3:22 PM Srinivasa T N  wrote:

> Hi All,
>I have a primary postgresql 12 server which is being continuously used
> for transaction processing.  For reporting purposes, I want to set up a
> secondary server which has got previous day data.  Everyday night, I want
> the data from primary to be shifted to secondary.  I can achieve this
> manually using pg_basebackup on primary and pg_restore on secondary.  Is
> there any other automated efficient way to achieve the same?  Any relevant
> docs would be helpful.
>
> Regards,
> Seenu.
>


-- 
Abdul Qoyyuum Bin Haji Abdul Kadir
HP No: +673 720 8043


Re: Getting the table ID

2022-07-19 Thread Walter Dörwald

On 19 Jul 2022, at 5:10, Igor Korot wrote:


Hi, guys,

In database theory, each table is identified as 
"schema_name"."table_name".


When I tried to look at how to get the table id inside the PostgreSQL,
I saw that I needed to look at the pg_class table.

SELECT oid FROM pg_class WHERE relname = "table_name";

However that query will give a non-unique table id (see the first 
sentence).


So how do I get the table id based on the "schema_name.table_name"?

There is a pg_namespace table - is this where the schema should come 
from?

If yes - how?
Looking at that table I don't see any unique fields...
Or is this something that is hidden?

In fact I'm trying to run following query:

SELECT c.oid FROM pg_class c, pg_namespace nc WHERE nc.oid =
c.relnamespace AND c.relname = ? AND nc.nspname = ?;

from my ODBC based program, but it returns 0 rows on SQLFetch.

I know PostgreSQL does not use '?' for query parameters
but I thought that since it's ODBC, everything should work.

Nevertheless, all bindings were successful, but no rows are returned.

Is this query correct?

Thank you.


That's more or less the same query that I am using:

select
r.oid as oid,
n.nspname || '.' || r.relname as name
from
pg_catalog.pg_namespace n
join
pg_catalog.pg_class r on n.oid = r.relnamespace
where
(r.relkind = 'r') and
(n.nspname not like 'pg_%') and
(n.nspname != 'information_schema') and
(n.nspname = 'email') and
(r.relname = 'emailhistory')

Maybe your problem has to do with uppercase/lowercase schema and/or 
table names?
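
As a shorter alternative for resolving a schema-qualified name to an OID (a 
sketch, independent of the ODBC issue), the regclass machinery applies the 
usual case-folding and quoting rules for you:

-- Returns the table's OID, or NULL if it does not exist (to_regclass, 9.4+).
SELECT to_regclass('email.emailhistory')::oid;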


Servus,
   Walter


About limitation of using postgresql in china

2022-07-19 Thread pig lee
Dear postgresql member



I will use PostgreSQL in China for a project.

But I’m not sure whether there are some limitations when using PostgreSQL
in China (limitations that apply to China only).

For example, license limitations in China or other limitations on its use.


Can you tell me about any PostgreSQL limitations that apply only to use in China?



Look forward to your Email.

Thank you very much.



Best Regards

Pengkun


Re: pg_receivewal/xlog to ship wal to cloud

2022-07-19 Thread Ganesh Korde
On Tue, 19 Jul 2022, 2:09 am Alan Hodgson,  wrote:

> On Mon, 2022-07-18 at 15:55 -0400, neslişah demirci wrote:
> > Hi all,
> >
> > Wondering if anyone has any experience of using pg_receivewal/xlog
> > to ship wal files to GCP/S3?
> >
>
> I use archive_command to send WAL to S3. It works fine. I do gzip
> them before uploading, as they are usually pretty compressible.
>
> I use a lifecycle rule on the bucket to automatically delete ones
> older than I might need.
>
> It's not the fastest process in the world, and replay on our
> reporting server does occasionally fall behind, but I think that's
> true of WAL replay in general as a serialized process.
>

Have a look at this also.

https://supabase.com/blog/2020/08/02/continuous-postgresql-backup-walg
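
For reference, a minimal sketch of the archive_command approach described 
above, set in postgresql.conf (the bucket name is a placeholder; assumes the 
AWS CLI is installed on the database host):

# Compress each completed WAL segment and stream it to S3.
archive_mode = on
archive_command = 'gzip -c %p | aws s3 cp - s3://my-wal-archive/wal/%f.gz'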


pg_dump is filling C: drive up to 100 percent

2022-07-19 Thread Meera Nair
Hi team,

pg_dump  is filling C:\

This is for postgres version 12. Binary directory, data directory are in E:\
I'm redirecting pg_dump output also to E:\, I was taking a tar dump output.
But C:\ is getting filled up . Looks like it is used for some sort of temporary 
staging.
Is there a way not to use C:\ for this?

Regards,
Meera



How to handle failed COMMIT

2022-07-19 Thread Håvar Nøvik
I've been wondering what the behavior of postgres is when the server process 
stops, for whatever reason, at certain critical points in the execution flow.

In the following example the client will only regard the data as stored once 
the COMMIT command has successfully executed. But the server, client or 
network may fail at any point during the execution, and therefore the server and 
client may not be in sync about what the current state is.

BEGIN;
INSERT INTO ;
COMMIT;

To experiment with this I inserted a stupid if statement (see patch) which will 
make the server process exit(1) if the client sends a COMMIT command, but only 
after the COMMIT command has been processed on the server and just before the 
server sends the close command (wire protocol). I.e. the server has committed the 
transaction, but the client just experiences that the connection has been 
closed for some reason:

server closed the connection unexpectedly
This probably means the server terminated abnormally
before or while processing the request.
The connection to the server was lost. Attempting reset: Succeeded.

So my question is, how should the client handle these situations? I think most 
systems will have some variation of the following:

try
   execute transactional sql
catch (commit failed)
   // regard data as not stored

But, this doesn't seem to be enough, so I guess you would have to do something 
like:

try
   execute transactional sql
catch (commit failed)
   if (data is not stored)
 // regard data as not stored
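
One way to make that "data is not stored" check concrete is a client-generated 
idempotency key; a sketch, assuming a hypothetical orders table with a unique 
request_id column:

-- The client generates request_id before starting the transaction and keeps it.
BEGIN;
INSERT INTO orders (request_id, amount) VALUES ('req-20220719-0001', 42.00);
COMMIT;   -- if the connection drops here, the outcome is unknown

-- On reconnect, the presence of the key shows whether the COMMIT took effect:
SELECT EXISTS (SELECT 1 FROM orders WHERE request_id = 'req-20220719-0001');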

Thanks in advance.

/Håvar Nøvik

0001-exit-before-close-on-commit-command.patch
Description: Binary data


Setting up a server with previous day data

2022-07-19 Thread Srinivasa T N
Hi All,
   I have a primary postgresql 12 server which is being continuously used
for transaction processing.  For reporting purposes, I want to set up a
secondary server which has the previous day's data.  Every night, I want
the data from the primary to be shifted to the secondary.  I can achieve this
manually using pg_basebackup on primary and pg_restore on secondary.  Is
there any other automated efficient way to achieve the same?  Any relevant
docs would be helpful.

Regards,
Seenu.


Re: Proposed Translations of Updated Code of Conduct Policy

2022-07-19 Thread Guillaume Lelarge
Hi,

Le mar. 19 juil. 2022 à 08:49, Stefan Fercot 
a écrit :

> Hi,
>
> There's a typo in the french translation. "consécutis" should be "
> consécutifs".
>
> And per consistency, I'd suggest to not mix "mandat" and "termes" to speak
> about the same thing. I believe it would be better to replace "(3) termes
> consécutis" by "(3) mandats consécutifs".
>
>
Agreed on both.


> --
> Kind Regards,
> Stefan FERCOT
> EDB: https://www.enterprisedb.com
>
> --- Original Message ---
> Le samedi 16 juillet 2022 à 7:24 PM, Lucie Šimečková <
> luciesimeck...@outlook.com> a écrit :
>
> Hello all,
>
>
>
> Following the recent update of the English version of the Code of Conduct
> that clarified the term limits of CoC committee members, the PostgreSQL
> Community Code of Conduct Committee has solicited translations of the
> updated CoC.
>
>
>
> The English version of the Policy is at:
>
> https://www.postgresql.org/about/policies/coc/
>
> The following translations were contributed by:
>
>
>
>- German translation provided by Andreas Scherbaum
>- French translation provided by Guillaume Lelarge
>- Hebrew translation provided by Michael Goldberg
>- Italian translation provided by Federico Campoli and reviewed by
>Tommaso Bartoli
>- Russian translation provided by Alexander Lakhin
>- Chinese translation provided by Bo Peng
>- Japanese translation provided by Tatsuo Ishii
>
>
>
> The proposed translations are attached as plain text and PDF files to this
> message.
>
> If you have any comments or suggestions for the proposed translations,
> please bring them to our attention no later than 11:59 PM UTC on Sunday, July
> 24, 2022. What time is that in my time zone?
> https://www.timeanddate.com/worldclock/converter.html?iso=20220724T23=1440
>
> Thank you!
>
> Lucie Šimečková
>
> PostgreSQL Community Code of Conduct Committee Member
>
>
>
>
>

-- 
Guillaume.


Re: Proposed Translations of Updated Code of Conduct Policy

2022-07-19 Thread Stefan Fercot
Hi,

There's a typo in the French translation: "consécutis" should be "consécutifs".

For consistency, I'd also suggest not mixing "mandat" and "termes" to refer 
to the same thing. I believe it would be better to replace "(3) termes 
consécutis" with "(3) mandats consécutifs".

--
Kind Regards,

Stefan FERCOT
EDB: [https://www.enterprisedb.com](https://www.enterprisedb.com/)

--- Original Message ---
Le samedi 16 juillet 2022 à 7:24 PM, Lucie Šimečková 
 a écrit :

> Hello all,
>
> Following the recent update of the English version of the Code of Conduct 
> that clarified the term limits of CoC committee members, the PostgreSQL 
> Community Code of Conduct Committee has solicited translations of the updated 
> CoC.
>
> The English version of the Policy is at:
>
> https://www.postgresql.org/about/policies/coc/
>
> The following translations were contributed by:
>
> - German translation provided by Andreas Scherbaum
> - French translation provided by Guillaume Lelarge
> - Hebrew translation provided by Michael Goldberg
> - Italian translation provided by Federico Campoli and reviewed by Tommaso 
> Bartoli
> - Russian translation provided by Alexander Lakhin
> - Chinese translation provided by Bo Peng
> - Japanese translation provided by Tatsuo Ishii
>
> The proposed translations are attached as plain text and PDF files to this 
> message.
>
> If you have any comments or suggestions for the proposed translations, please 
> bring them to our attention no later than 11:59 PM UTC on Sunday, July 24, 
> 2022. What time is that in my time zone? 
> https://www.timeanddate.com/worldclock/converter.html?iso=20220724T23=1440
>
> Thank you!
>
> Lucie Šimečková
>
> PostgreSQL Community Code of Conduct Committee Member

Migrating from Oracle - Implicit Casting Issue

2022-07-19 Thread David G. Johnston
On Monday, July 18, 2022, Karthik K L V  wrote:

> Hi Team,
>
> We are migrating from Oracle 12c to Aurora Postgres 13 and running into
> implicit casting issues.
>
> Oracle is able to implicitly cast the bind value of prepared statements
> executed from the application to appropriate type - String -> Number,
> String -> Date, Number -> String etc. when there is a mismatch b/w java
> data type and the column Datatype.
>
> For example: If the Datatype of a Column is defined as Number and the
> application sends the bind value as a String (with single quotes in the
> query) - Oracle DB is able to implicitly cast to Number and execute the
> query and return the results.
>
> The same is not true with Postgres and we are getting below exception
>
> *org.postgresql.util.PSQLException: ERROR: operator does not exist: bigint
> = character varying*
> *Hint: No operator matches the given name and argument types. You might
> need to add explicit type casts..*
>
> We found a Postgres Driver property - stringtype=unspecified which appears
> to solve this problem and have the following questions.
> https://jdbc.postgresql.org/documentation/83/connect.html
>
> Could you please let us know the following?
>
> Q1) Will configuring this stringtype property introduce overhead on
> Postgres leading to Performance issues
> Q2)Does setting this attribute have any other implications on the data in
> the DB.
> Q3)Is there any plan to deprecate / stop supporting this attribute in
> future Aurora Postgres releases.
>
>
That setting is not recognized by the server in any way, it is a driver
concern only.  IIUC it makes the Java Driver behave in a way consistent
with the expectations of the server since by leaving the supplied type info
undeclared the server can use its own logic.  If it works for you I say use
it, it will be less problematic than methodically fixing your queries up
front.  Though if there are some that show to be bottlenecks getting the
type info correct may prove to make a difference in some situations.
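
For reference, the property is set on the JDBC connection URL (host and 
database are placeholders):

jdbc:postgresql://dbhost:5432/appdb?stringtype=unspecified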

David J.


Migrating from Oracle - Implicit Casting Issue

2022-07-19 Thread Karthik K L V
Hi Team,

We are migrating from Oracle 12c to Aurora Postgres 13 and running into
implicit casting issues.

Oracle is able to implicitly cast the bind value of prepared statements
executed from the application to appropriate type - String -> Number,
String -> Date, Number -> String etc. when there is a mismatch b/w java
data type and the column Datatype.

For example: If the Datatype of a Column is defined as Number and the
application sends the bind value as a String (with single quotes in the
query) - Oracle DB is able to implicitly cast to Number and execute the
query and return the results.

The same is not true with Postgres and we are getting below exception

*org.postgresql.util.PSQLException: ERROR: operator does not exist: bigint
= character varying*
*Hint: No operator matches the given name and argument types. You might
need to add explicit type casts..*

We found a Postgres Driver property - stringtype=unspecified which appears
to solve this problem and have the following questions.
https://jdbc.postgresql.org/documentation/83/connect.html

Could you please let us know the following?

Q1) Will configuring this stringtype property introduce overhead on
Postgres leading to Performance issues
Q2)Does setting this attribute have any other implications on the data in
the DB.
Q3)Is there any plan to deprecate / stop supporting this attribute in
future Aurora Postgres releases.


-- 
Karthik klv