Re: encoding option when database backup

2024-01-02 Thread rob stone



On Wed, 2024-01-03 at 15:59 +0900, Taek Oh wrote:
> Hi there,
> 
> I would like to make an inquiry regarding the encoding option for the
> database backup.
> When I was using the previous version of PGADMIN 4, I had a
> dropbar for the encoding format for the backup option.
> But I recently started using the latest version of PGADMIN 4(V8.1),
> and I realized that the dropbar for encoding format has disappeared
> as we can observe from the attached image.
> Are there any solutions where I can activate the encoding dropbar?
> 
> Thank you in advance,
> 
> Taek
> 
> 


See https://www.pgadmin.org/docs/pgadmin4/latest/backup_dialog.html 
where it mentions:-

Use the Encoding drop-down listbox to select the character encoding
method that should be used for the archive.






Re: How to force "re-TOAST" after changing STORAGE or COMPRESSION?

2023-10-02 Thread rob stone
Hello,

On Tue, 2023-10-03 at 08:57 +0900, Michael Paquier wrote:
> On Mon, Oct 02, 2023 at 04:42:15PM +0200, Dominique Devienne wrote:
> > According to the doc, the table is NOT changed.
> > In my case, I DO want to have the bytea column rewritten
> > according to the new STORAGE and/or COMPRESSION.
> > The doc doesn't mention how to achieve that.
> 
> Yes, the compression type of the existing toasted values is not
> changed after an ALTER SET, and that's not something one would do on
> a daily basis, either.
> 
> > VACUUM?
> > VACUUM FULL?
> > Else?
> 
> VACUUM was considered as an option to force a rewrite, but it has been
> removed because we were not sure that this is the correct path to do
> so or that in some cases forcing the hand of the user was incorrect.
> It was also creating a penalty in some of the hot loops of area:
> commit: dbab0c07e5ba1f19a991da2d72972a8fe9a41bda
> committer: Michael Paquier 
> date: Mon, 14 Jun 2021 09:25:50 +0900
> Remove forced toast recompression in VACUUM FULL/CLUSTER
> 
> Related thread:
> https://postgr.es/m/20210527003144.xxqppojoiwurc...@alap3.anarazel.de
> --
> Michael


Would running CLUSTER on the table use the new parameters for the
rewrite?

The docs about CLUSTER do not mention anything about TOASTed data. If
the TOAST data is huge it is held in a separate file on disk. If the
TOAST column is being compressed, and its size drops below the
threshold that decides whether out-of-line storage is required, will
the outcome be a single row after the rewrite and deletion of the "old"
TOASTed entry?
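
For concreteness, the kind of thing I mean (hypothetical table t with a
bytea column named payload); the only sure way I know of today is to
rewrite the values through an expression:

ALTER TABLE t ALTER COLUMN payload SET COMPRESSION lz4;
-- forming a new datum forces de-TOAST and re-compression under the
-- new setting (the table bloats until vacuumed)
UPDATE t SET payload = payload || ''::bytea;
VACUUM t;
-- spot-check which compression each stored value now has (14+)
SELECT pg_column_compression(payload), count(*) FROM t GROUP BY 1;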

Robert





Re: Cast INTEGER to BIT confusion

2023-08-17 Thread rob stone
Hello,



On Tue, 2023-08-15 at 08:49 +, [Quipsy] Markus Karg wrote:
> 
> 
> 
> Hello PostgreSQL Community,
>  
> I have 25+ years of experience with some other RDBMS, but I am a
> PostgreSQL starter, so I assume the following is rather a simple
> beginner’s question…:
>  
> I like to store just a single bit but that can be either 1 or 0, so I
> tried to do this:
>  
> CREATE TABLE T (c BIT);
> INSERT INTO T VALUES (1);
> -- I MUST NOT change both lines in any way as these are part of a
> third-party application!
>  
> Unfortunately this tells me:
>  
> column "c" is of type bit but expression is of type integer
>  
> So I logged in as a cluster admin and I tried this:
>  
> CREATE CAST (integer AS bit) WITH INOUT AS IMPLICIT;
>  
> Unfortunately that tells me:
>  
> cast from type integer to type bit already exists
>  
> This is confusing! Apparently PostgreSQL 15.3 comes with the needed
> cast out-of-the-box but it does not apply it? This is confusing!
>  
> What is my fault?
>  
> Thanks!
> -Markus


It works thus:-

postgres 15.4 =# CREATE TABLE T (c BIT);
CREATE TABLE
postgres 15.4 =# insert into T values (B'1');
INSERT 0 1
postgres 15.4 =# select * from t;
 c 
---
 1
(1 row)

See section 9.6 in the doco.
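
The catch is that the built-in integer -> bit cast is explicit-only, so
it is not applied in an assignment context such as INSERT. For example:

SELECT 1::bit;                  -- works: explicit cast
INSERT INTO t VALUES (1::bit);  -- works for the same reason
INSERT INTO t VALUES (1);       -- fails: no assignment cast from integer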

HTH,
Rob





Re: PgSQL 15.3: Execution plan not using index as expected

2023-08-11 Thread rob stone
Hello,

> PostgreSQL 9.3.4 on x86_64-unknown-linux-gnu, compiled by gcc
> (Debian 4.8.3-2) 4.8.3, 64-bit
> 
> PostgreSQL 15.3 (Debian 15.3-1.pgdg120+1) on x86_64-pc-linux-gnu, 
> compiled by gcc (Debian 12.2.0-14) 12.2.0, 64-bit
> (1 row)
> 
> 
9.3 plan
->  Index Scan using client_session_user_id_idx on client_session  

Looks like a collation issue, given the difference in the underlying
OS/libc versions (note the different compilers used).
In the 9.3 plan the index is used.
Maybe try a reindex of the table.
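
For example (names taken from your plan):

REINDEX INDEX client_session_user_id_idx;
-- or rebuild every index on the table:
REINDEX TABLE client_session;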

HTH,
Rob





Re: Regd. the Query w.r.t Alternative functionalities from Oracle  PostgreSQL (Oracle to PostgreSQL database migration)

2022-12-28 Thread rob stone



On Wed, 2022-12-28 at 09:45 -0800, Adrian Klaver wrote:
> On 12/23/22 01:37, Chetan Kosanam wrote:
> > TCS Confidential
> > 
> > 
> 
> See here:
> 
> https://www.postgresql.org/support/professional_support/
> 
> 

Also there is ora2pg. See https://ora2pg.darold.net/






Re: tablesample clause doesnt support to be applied to subquery.

2022-08-08 Thread rob stone
Hello,

On Mon, 2022-08-08 at 15:52 +0800, Wind wrote:
> hi~
> I test "tablesample" feature on pg9.5 and found that tablesample
> doesnt support to be applied to subquery.
> for example this sql doesnt work on pg9.5:
> ```
> create table t1(
>     c1 int,
>     c2 int
> );
> select c2 from (select * from t1) as t2 tablesample system (50);
> ```
> while i read sql2003 standard about tablesample and think it should
> allow this usage according to standard.
> is it a deviation to sql standard?


Version 9.5 is not supported. The oldest supported version is 10, the
current version is 14.

Please read chapter 58 in the doco.
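
TABLESAMPLE is only accepted on base tables and materialized views,
not on subqueries. A common workaround is to sample first and wrap
that:

SELECT c2
FROM (SELECT * FROM t1 TABLESAMPLE SYSTEM (50)) AS t2;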

HTH,





Re: Nested Schemata, in a Standard-Compliant Way?

2021-09-28 Thread rob stone
Hello Ray,

On Tue, 2021-09-28 at 09:24 -0400, Raymond Brinzer wrote:
> Greetings.
> 
> 
> 
> I'm wondering whether such a feature could be added, without breaking
> either existing code, or compliance with the SQL standard.  For
> instance, borrowing :: from languages like Ruby and Perl:
> 
> SELECT * FROM ::projects::contacts::people;  -- Absolute path
> cd ::projects;   -- Session-specific
> SELECT * FROM contacts::people;  -- Relative path
> 

Double colons are used for casting.
E.g., $1::INET or $1::INTEGER where $1 is a string.

What you are after is sub-schemas; hypothetical syntax might look like:

set schema projects;
set subschema contacts;
select * from people;

I don't know enough about the catalogue tables to know if a tree
structure for schemas is possible, the amount of work involved or even
if there is a valid use case.
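
In the meantime, the usual emulation is a flat naming convention plus
search_path, e.g. (hypothetical names):

CREATE SCHEMA projects_contacts;
CREATE TABLE projects_contacts.people (id int);
SET search_path TO projects_contacts;  -- the "cd"
SELECT * FROM people;                  -- relative reference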

My 2 cents.






Re: SQL queries as sets: was The tragedy of SQL

2021-09-15 Thread rob stone


> 
> Rich, could you please elaborate on SQL queries being based on sets?
> I
> never thought of it that way, and would like to hear your related
> thoughts.
> 
When Codd & Date elaborated the relational model, it was based on set
theory.
You have sets of data. Is there a relationship between them? If so, how
do you express it? SQL came to exist.
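
SQL still exposes the set operations directly, e.g. (hypothetical
tables):

CREATE TEMP TABLE customers (city text);
CREATE TEMP TABLE suppliers (city text);
INSERT INTO customers VALUES ('Paris'), ('Sydney');
INSERT INTO suppliers VALUES ('Sydney'), ('Osaka');

SELECT city FROM customers INTERSECT SELECT city FROM suppliers;
SELECT city FROM customers EXCEPT SELECT city FROM suppliers;
SELECT city FROM customers UNION SELECT city FROM suppliers;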

My two cents.


> SteveT
> 
> Steve Litt 
> Spring 2021 featured book: Troubleshooting Techniques of the
> Successful
> Technologist http://www.troubleshooters.com/techniques
> 
> 





Re: Basic Questions about PostgreSQL.

2021-09-15 Thread rob stone



On Wed, 2021-09-15 at 05:10 +, A Z wrote:
> -In PostgreSQL, what is the difference between Roles, Groups, and
> Policies exactly? What are they each, how do they relate to one
> another, and which one(s) can be assigned to another?
> 
> -Is there a simple, immediate, straightforward way to temporarily
> disable PostgreSQL database
> use of all other users, except development users, or in fact just one
> user (one developer, yourself)
> so that maintenance on the data or the data model may be performed?
> 

Read
https://www.postgresql.org/docs/13/app-postgres.html#id-1.9.5.14.6.5

about single user mode.
You shut down the cluster and run the postgres backend itself in
single-user mode (postgres --single -D <datadir> <dbname>).
After you quit out of single-user mode, restart the cluster.
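
If shutting the cluster down is not practical, a rough SQL-level
alternative (assuming a database named mydb and a developer role dev;
superusers and roles holding their own CONNECT grant are unaffected):

REVOKE CONNECT ON DATABASE mydb FROM PUBLIC;
GRANT CONNECT ON DATABASE mydb TO dev;
-- terminate the sessions that are already connected
SELECT pg_terminate_backend(pid)
FROM pg_stat_activity
WHERE datname = 'mydb' AND pid <> pg_backend_pid();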

HTH,
Rob






Re: update non-indexed value is slow if some non-related index/fk are enabled

2021-09-06 Thread rob stone



On Sun, 2021-09-05 at 11:21 -0400, Tom Lane wrote:
> "David G. Johnston"  writes:
> > On Friday, September 3, 2021, Philippe Doussot <
> > philippe.dous...@up.coop>
> > wrote:
> > > I don't understand why disabling all index from the table speed
> > > up the
> > > update because the boolean column is not indexed
> 
> > Index entries point to physical records.  You just deleted one
> > physical
> > record and added another.  The indexes need to be updated with that
> > information.
> 
> Yeah.  The OP's mental model is apparently update-in-place, but
> that's
> not how Postgres does things.
> 
> The index-update overhead is avoided if the update is "HOT", which
> requires that (a) no indexed column changes and (b) there is room
> on the same page for the new copy of the row.  Ensuring (b) requires
> running with a fairly low fill-factor, which bloats your table and
> thereby creates its own costs.  Still, that might be worth doing
> depending on your particular circumstances.
> 
> regards, tom lane
> 
> 
If the DDL for that table had the column defined like this:-

my_boolean  BOOLEAN,

instead of:-

my_boolean  BOOLEAN NOT NULL DEFAULT FALSE/TRUE, (whichever is
convenient)

then that column would contain either 'f' or 't' on insert instead of
null.

Then even if a fillfactor was not specified for that table, an update
of that single column (which does not appear in an index) would merely
swap the values.
Surely that would write it back in place?
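
For reference, the fillfactor/HOT side of this can be watched directly
(hypothetical table):

CREATE TABLE example (
    id integer PRIMARY KEY,
    my_boolean boolean NOT NULL DEFAULT false
) WITH (fillfactor = 70);

-- after some updates, see how many were HOT
SELECT n_tup_upd, n_tup_hot_upd
FROM pg_stat_user_tables
WHERE relname = 'example';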

Also, having boolean columns that can contain a null makes it
difficult for the getters of that table to decide whether 'null' means
true or false.

Just an observation.

Rob






Re: php connection failure

2021-08-11 Thread rob stone
Hello,

On Wed, 2021-08-11 at 11:09 +, ourdiaspora wrote:
> 
> 
> The connection to postgresql seems OK now (thank you), but the php
> web page continues to fail to connect to the database:
> 
> "
> 
> 
> Generic CPAC database
> 
> 
> <?php
>  $dbconn = pg_connect("dbname=cpacweb
> user=cpaca host=localhost") or die("Could not connect");
> $stat = pg_connection_status($dbconn);
>  if ($stat === PGSQL_CONNECTION_OK) {
>  echo 'Connection status ok';
>  } else {
>  echo 'Connection status bad';
>  }
> ?>
> 
> 
> "
> Returns:
> 
> "
> Generic CPAC database Could not connect
> "
> 

I don't know which version of PHP you are running, but can you re-write
the call to pg_connect as a try . . . catch block so that the exact
error message is returned instead of just a "die"?

Are you sure postgres is configured to use port 5432? Is there perhaps
a PGPORT environment variable set that points somewhere else?

Why are you starting postgres with a -c option and no parameters
following?

I assume you know that version 9.6 of postgres is nearly at EoL.


HTH,
Rob







Re: JWT decoder

2021-08-08 Thread rob stone
Hello,

On Mon, 2021-08-09 at 02:16 +, Masih Tavassoli wrote:
> Hi experts,
> 
> I am trying to find a way to decode a URL request header and extract
> its JSON fields in postgreql .
> 
> I can do this in Oracle sql using 
> 
> select    (utl_raw.cast_to_varchar2 (utl_encode.base64_decode(
> utl_raw.cast_to_raw (regexp_replace ( (
> 'eyJhbGciOiJSUzI1NiIsImtpZCI6InBhLWFwaS1zaWduIn0.eyJzY29wZSI6W10sInN1
> YiI6Imp1bGllQHNhcHBoaXJlY29udmV5YW5jaW5nLm5ldC5hdSIsImNsaWVudCI6InB4Y
> V9hY19zZXR0c3BsdXMiLCJ1c2VybmFtZSI6Imp1bGllQHNhcHBoaXJlY29udmV5YW5jaW
> 5nLm5ldC5hdSIsImV4cCI6MTU1NDA4NzU2N30.OZwVETCP+HUDsSkudCfa52eQT2Fyt6B
> gefFpYr5xpJROjZJeVnXtzsU6MS8aHauqyyOwQSz8iG6OC2YjYgrGiZ8YTwTh1uHptNji
> cyCFmCkh++d0oGdK78ZOKIAvuslW3GX4V13hPUJADzGLkyO8hy01YF9Id4+7GrvmnRRkw
> PO7C2eUvKTB8pxDAA94Jpw74s4BiNTmm1Uv+56AXqdI8m2KFkLNwGoUR3OSNvsifl5uxB
> sGGmP4wbgNe/Dy1q2UsL2eBAW/GuWryHGCRz1/PR2jIV1v21ClQ8YLiFAzM3YxHb8m2ng
> zRynKof+Nw3nZvuafRQ16MV5THz0ahGFYLA'
> ), '[[:space:]]', '') from dual
> 
> 
> 
> But there doesn't seem to be a way doing it in postgres.
> 
> Has anyone got any suggesions?
> 
> Thanks
> Masih
> 


Have a look at the orafce extension on github.
It handles "select from dual", etc.
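
The base64 part itself can be done with the built-ins, though. A
sketch (untested; the payload is the second dot-separated segment, and
base64url needs its -_ translated and the = padding restored):

WITH t(jwt) AS (
    VALUES ('paste.the.token'::text)   -- placeholder for the full JWT
), p AS (
    SELECT translate(split_part(jwt, '.', 2), '-_', '+/') AS seg FROM t
)
SELECT convert_from(
           decode(seg || repeat('=', (4 - length(seg) % 4) % 4), 'base64'),
           'UTF8')::json AS payload
FROM p;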

Cheers,
Rob







Re: optimization issue

2021-07-10 Thread rob stone
Hello Ken,

On Fri, 2021-07-09 at 13:27 -0500, Kenneth Marshall wrote:
> On Fri, Jul 09, 2021 at 01:16:16PM +1000, rob stone wrote:
> > Hello,
> > 
> > I am curious.
> > 
> > NVL, DECODE and SELECT FROM dual are Oracle methods and these
> > appear in
> > your code.
> > 
> > How did you make these work in Postgres? 
> > 
> > Cheers,
> > Rob
> 
> Hi Rob,
> 
> At a guess, they are using the Orafce PostgreSQL extension. It
> provides
> a lot of compatibility functions.
> 
> Regards,
> Ken


Thanks. Found it on github.

Regards,
Rob






Re: optimization issue

2021-07-08 Thread rob stone
Hello,

I am curious.

NVL, DECODE and SELECT FROM dual are Oracle methods and these appear in
your code.

How did you make these work in Postgres? 

Cheers,
Rob







Re: How to install PostgreSQL binaries on a different directory than the default one

2021-04-02 Thread rob stone
Hello,

On Fri, 2021-04-02 at 22:01 +, Allie Crawford wrote:
> Hi,
> I am new in postgresql and I am trying to install PostgreSQL on Linux
> server but instead of using the default location for the binaries I
> want to install them in a different directory.
> I have research for a while and I cannot find any information on how
> to do that using the dnf utility which is what the PostgreSQL website
> gives me as instruction when I go the download page.
>  
> I have a directory called /pg01 that I want to use to install the
> binaries. The postgreSQL download site (
> https://www.postgresql.org/download/linux/redhat/ ) gives me the
> following steps to install the binaries:
>  
>  
> That set of instructions shown right above, installs the binaries
> under the/usr/pgsql-13, and what I want is to install the binaries
> under /pg01 instead of /usr.
> My question is, what are the changes I need to do in the instructions
> detailed right above, so that I can get the binary installation under
> the /pg01 directory instead of the default /usr directory.
>  
> Thanks in advance for any help you can five me on this.
>  
> Regards,
> Allie

Most Linux distros keep the binaries by major version. So when you
install version 14 it will go into /usr/pgsql-14.
Thus, by using different port numbers you can run multiple versions on
the same box.
Useful for development, QA, etc.

HTH,
Robert






Re: libpq and mysterious "invalid byte sequence for encoding UTF8".

2021-02-01 Thread rob stone
Hello,

On Mon, 2021-02-01 at 18:03 +0100, Jiří Pavlovský wrote:
> 
> 
> Thanks for the answer.
> 
> 
> I don't think triggers or constraints are the issue.
> 
> The exact same code works if I create a small test program with the
> same 
> query and the same parameters.
> 
> But when used in the context of the whole application it gives the
> error.
> 
>   I don't know what should I try. I tried all possible combinations
> of 
> parameters.
> 
> I have just even tried to disable triggers and remove any constraints
> from the table. Same error.
> 
These columns:-

maincontact boolean DEFAULT false,
publdatacontact boolean DEFAULT false,
invcontact boolean DEFAULT false,
queries_recipient boolean,
fakelastname boolean NOT NULL DEFAULT false,

are defined as booleans and all except one have defaults declared, yet
you are passing NULLs, which the insert tries to cast to boolean.
Maybe this is the cause of the error?

My 2 cents worth.








Re: Strange (and good) side effect of partitioning ?

2021-01-14 Thread rob stone
Hi,

On Thu, 2021-01-14 at 20:48 +, Phil Florent wrote:
> Hi,
> 
> I read that on Jonathan Lewis' blog :
> 
> (I believe that there may be some RDBMS which will treat (e.g.) “X
> between 20 and 10” as being identical to“X between 10 and 20” )
> 
> I am puzzled. PostgreSQL seems NOT to treat X between 20 and 10” as
> being identical to“X between 10 and 20" but it's complicated.
> 
> Here is my test case:
> 
> select version();
>  PostgreSQL 13.1 (Ubuntu 13.1-1.pgdg20.04+1) on x86_64-pc-linux-gnu,
> compiled by gcc (Ubuntu 9.3.0-17ubuntu1~20.04) 9.3.0, 64-bit
> 
> create table t1 (rn integer , object_name text) partition by
> range(rn);
> 
> create table t1a partition of t1 for values from (1) to (50001);
> 
> 
> \d+ t1
>              Partitioned table "public.t1"
>    Column    |  Type   | Storage
> -------------+---------+----------
>  rn          | integer | plain
>  object_name | text    | extended
> Partition key: RANGE (rn)
> Partitions: t1a FOR VALUES FROM (1) TO (50001)
> 
> 
> 
> insert into t1 select rownum rn,
>         upper(md5(random()::text)) object_name
> from
>         (select generate_series(1,5) rownum) serie
> ;
> 
> explain analyze select object_name from t1
> where
>         rn between 20 and 10
> ;
>                         QUERY PLAN
> --------------------------------------------------------------------
>  Result  (cost=0.00..0.00 rows=0 width=0) (actual time=0.002..0.002
> rows=0 loops=1)
>    One-Time Filter: false
>  Planning Time: 0.116 ms
>  Execution Time: 0.020 ms
> 
> It's OK but:
> 
> explain analyze select object_name from t1a
> where
>         rn between 20 and 10
> ;
>                         QUERY PLAN
> --------------------------------------------------------------------
>  Seq Scan on t1a  (cost=0.00..1167.00 rows=1 width=33) (actual
> time=6.553..6.553 rows=0 loops=1)
>    Filter: ((rn >= 20) AND (rn <= 10))
>    Rows Removed by Filter: 5
>  Planning Time: 0.092 ms
>  Execution Time: 6.573 ms
> 
> At first I thought it was related to partition pruning but:
> 
> set enable_partition_pruning = false;
> 
> explain analyze select object_name from t1
> where
>         rn between 20 and 10
> ;
> 
>                         QUERY PLAN
> --------------------------------------------------------------------
>  Result  (cost=0.00..0.00 rows=0 width=0) (actual time=0.002..0.003
> rows=0 loops=1)
>    One-Time Filter: false
>  Planning Time: 0.104 ms
>  Execution Time: 0.021 ms
> 
> 
> Confirmation since I still obtain "One-Time Filter: false" if I don't
> filter on the partition key:
> 
> create table t2 (rn integer , rn2 integer, object_name text)
> partition by range(rn);
> 
> create table t2a partition of t2 for values from (1) to (50001);
> 
> \d+ t2
>              Partitioned table "public.t2"
>    Column    |  Type   | Storage
> -------------+---------+----------
>  rn          | integer | plain
>  rn2         | integer | plain
>  object_name | text    | extended
> Partition key: RANGE (rn)
> Partitions: t2a FOR VALUES FROM (1) TO (50001)
> 
> insert into t2 select rownum rn, rownum rn2,
>         upper(md5(random()::text)) object_name
> from
>         (select generate_series(1,5) rownum) serie
> ;
> 
> explain analyze select object_name from t2
> where
>         rn2 between 20 and 10
> ;
> 
> 
>                         QUERY PLAN
> --------------------------------------------------------------------
>  Result  (cost=0.00..0.00 rows=0 width=0) (actual time=0.002..0.003
> rows=0 loops=1)
>    One-Time Filter: false
>  Planning Time: 0.185 ms
>  

Re: Could postgres12 support millions of sequences? (like 10 million)

2020-03-20 Thread rob stone
Hello,

On Thu, 2020-03-19 at 14:36 -0700, pabloa98 wrote:
> Hello,
> 
> My schema requires a counter for each combination of 2 values.
> Something like:
> 
> CREATE TABLE counter(
> group INT NOT NULL,
> element INT NOT NULL,
> seq_number INT NOT NULL default 0,
> CONSTRAINT PRIMARY KEY (group, element)
> );
> 
> For each entry in counter, aka for each (group, element) pair, the
> model requires a seq_number.
> 
> If I use a table "counter", I could still have counter collisions
> between 2 transactions. I need truly sequence behavior. Is that
> possible by using a table like "counter" table, where the counter
> could be increased out of the transaction so it performs as a
> sequence without having race conditions between concurrent
> transactions?
> 
> The other option is to create sequences for each new pair of (group,
> element) using triggers. There are millions of pairs. So this
> approach will generate millions of sequences.
> 
> How a PostgreSQL database would behave having millions of sequences
> in a schema? Would it degrade its performance? Is there any negative
> impact?
> 
> Regards
> 
> Pablo
> 
> 
> 


Have you tried the following (note that "group" is a reserved word, so
it needs quoting):-

BEGIN;
SELECT (seq_number + 1) FROM counter WHERE "group" = $1
AND element = $2 FOR UPDATE;

If that finds no row, then you need to insert into table counter with
seq_number = 1; otherwise update it with (seq_number + 1).
Then do the insert into whatever table needs the seq_number value.

If no errors are detected, then COMMIT, else ROLLBACK.
It is all in one transaction block.
We use this approach to assign invoice numbers.
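
An alternative that does the insert-or-increment in one atomic
statement (assuming the primary key on ("group", element) as sketched):

INSERT INTO counter ("group", element, seq_number)
VALUES ($1, $2, 1)
ON CONFLICT ("group", element)
DO UPDATE SET seq_number = counter.seq_number + 1
RETURNING seq_number;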

You could pre-seed table counter when a group/element pair is created
for the first time.

HTH,
Robert






Re: Need to find the no. of connections for a database

2020-02-27 Thread rob stone
Hello,

On Fri, 2020-02-28 at 01:10 +, sivapostg...@yahoo.com wrote:
> Then clearly I've misunderstood what advisory lock could do.   We
> used to put locks in SQL server to avoid deadlock situations.  I
> thought advisory lock is a similar one.   [ New to Postgres ]
> 
> The report is arrived from around 10 tables out of 300 tables that
> are in the database.  Once we start this process, we need to ensure
> that no other user could enter any data in those 10 tables, at least
> for the processing period.  I thought the table lock [ those 10
> tables ] will ensure no entry.  
> 
> We have a menu like this in our application
> 
> Purchase Entry
> Sales Entry
> Sales Cancellation
> Report
> Processing report
> 
> When we enter the Processing report and click process, we need to
> ensure that no one could enter data from Purchase Entry, Sales Entry,
> Sales Cancellation, etc.
> 

If the menu is built from a table in your database, then when
"Processing report" starts you could set a flag (boolean) against those
items so that if anybody tried to log-in or access those items, you
could simply display a message along the lines of "Processing report is
running. Please try again later".

When "Processing report" finishes, it just clears that flag.
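
Something along these lines (hypothetical menu table):

UPDATE menu_items
   SET locked = true
 WHERE name IN ('Purchase Entry', 'Sales Entry', 'Sales Cancellation');
-- ... run the processing report ...
UPDATE menu_items SET locked = false WHERE locked;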

HTH,
Rob






Re: deallocate statement failing

2020-02-06 Thread rob stone



On Fri, 2020-02-07 at 00:48 -0500, Tom Lane wrote:
> rob stone  writes:
> > 2020-02-07 15:42:55 AEDT ERROR:  syntax error at or near
> > "'AIT1581050529'" at character 13
> > 2020-02-07 15:42:55 AEDT STATEMENT:  DEALLOCATE  'AIT1581050529'
> > The deallocate statement fails. It doesn't matter if it is just
> > "DEALLOCATE" or "DEALLOCATE PREPARE".
> 
> I think you want double quotes not single quotes.  The argument
> of DEALLOCATE is an SQL identifier, not a string literal.
> 
>   regards, tom lane


That worked. Thanks.
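
For the archives, the form that works (the name is an identifier, so
double quotes):

PREPARE "AIT1581050529" AS SELECT 1;
DEALLOCATE "AIT1581050529";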







deallocate statement failing

2020-02-06 Thread rob stone
Hello,

Version:-


PostgreSQL 12.1 (Debian 12.1-2) on x86_64-pc-linux-gnu, compiled by gcc
(Debian 9.2.1-22) 9.2.1 20200104, 64-bit

Log:-

2020-02-07 15:42:55 AEDT LOG:  statement: SELECT COUNT(*) AS ps_count
FROM pg_prepared_statements WHERE name = 'AIT1581050529'
2020-02-07 15:42:55 AEDT ERROR:  syntax error at or near
"'AIT1581050529'" at character 13
2020-02-07 15:42:55 AEDT STATEMENT:  DEALLOCATE  'AIT1581050529'


The deallocate statement fails. It doesn't matter if it is just
"DEALLOCATE" or "DEALLOCATE PREPARE".
We put in the count statement to make sure the prepared statement still
existed and only issued the "DEALLOCATE" if ps_count is greater then
zero.

Does anybody know why it is giving this syntax error?

TIA,
Rob






pg_prepared_statements

2020-01-20 Thread rob stone
Hello,

PostgreSQL 11.6 (Debian 11.6-2~sid1) on x86_64-pc-linux-gnu, compiled
by gcc (Debian 9.2.1-21) 9.2.1 20191130, 64-bit

Is there something awry with the pg_prepared_statements view?
This is the only row found and it is NOT created by the application.

namestatement   prepare_timeparameter_types from_sql
S_1 ROLLBACK2020-01-21 12:35:35.031257  {}  false


In the development environment every statement is logged, yet for some
reason the DEALLOCATEs never appear in the logs. This could be caused
by the view not recording the prepared statement, as the DEALLOCATE is
only issued if the prepared statement name exists in the view.

max_prepared_transactions = 6 is the value set in postgresql.conf.


TIA,
Robert






Re: jsonb_set() strictness considered harmful to data

2019-10-23 Thread rob stone
Hello,

On Wed, 2019-10-23 at 20:33 +0200, Peter J. Holzer wrote:
> 
> I grant that SQL NULL takes a bit to get used to. However, it is a
> core
> part of the SQL language and everyone who uses SQL must understand it
> (I
> don't remember when I first stumbled across "select * from t where c
> =
> NULL" returning 0 rows, but it was probably within the first few days
> of
> using a database). And personally I find it much easier to deal with
> concept which are applied consistently across the whole language than
> those which sometimes apply and sometimes don't seemingly at random,
> just because a developer thought it would be convenient for the
> specific
> use-case they had in mind.
> 
> hp
> 

>From the JSON spec:-

3.  Values

   A JSON value MUST be an object, array, number, or string, or one of
   the following three literal names:

  false
  null
  true

   The literal names MUST be lowercase.  No other literal names are
   allowed.

So, you can't set a value associated to a key to SQL NULL. If a key
should not have a value then delete that key from the JSON.

If you decide your application is going to use one of those three
literal names, then you need to code accordingly. 
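
The distinction in SQL terms:

SELECT jsonb_set('{"a": 1}', '{b}', 'null'::jsonb);  -- {"a": 1, "b": null}
SELECT '{"a": 1, "b": 2}'::jsonb - 'b';              -- key deleted
SELECT jsonb_set('{"a": 1}', '{b}', NULL);           -- SQL NULL: whole result is NULL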

My 2 cents.






Re: jsonb_set() strictness considered harmful to data

2019-10-20 Thread rob stone
Hello,

On Sun, 2019-10-20 at 18:51 -0400, Andrew Dunstan wrote:
> On 10/20/19 4:18 PM, Tomas Vondra wrote:
> > 
> >https://www.postgresql.org/docs/12/functions-json.html
> > 
> > but that says nothing about how jsonb_set works with NULL values :-
> > (
> 
> 
> We should certainly fix that. I accept some responsibility for the
> omission.
> 
> 
> 

FWIW, if you are able to update the documentation, the current JSON RFC
is 8259.

https://tools.ietf.org/html/rfc8259


Cheers,
Rob








Re: "Failed to connect to Postgres database"

2019-09-26 Thread rob stone
Hi,

On Thu, 2019-09-26 at 16:21 +0200, Marco Ippolito wrote:
> 
> db:
>   type: postgres
>   datasource: host=localhost port=5433 user=fabmnet_admin   
>   password=password dbname=fabmnetdb sslmode=verify-full
> 

> 
> (base) marco@pc:~/fabric/fabric-ca$ fabric-ca-server start -b 
> admin:adminpw
> 2019/09/26 15:56:50 [INFO] Configuration file location: /home/marco
> /fabric/fabric-ca/fabric-ca-server-config.yaml
> 2019/09/26 15:56:50 [INFO] Starting server in home directory: 
> /home/marco/fabric/fabric-ca
> 2019/09/26 15:56:50 [INFO] Server Version: 1.4.4
> 2019/09/26 15:56:50 [INFO] Server Levels: &{Identity:2 Affiliation:1 
> Certificate:1 Credential:1 RAInfo:1 Nonce:1}
> 2019/09/26 15:56:50 [INFO] The CA key and certificate already exist
> 2019/09/26 15:56:50 [INFO] The key is stored by BCCSP provider 'SW'
> 2019/09/26 15:56:50 [INFO] The certificate is at: /home/marco/fabric
> /fabric-ca/ca-cert.pem
> 2019/09/26 15:56:50 [WARNING] Failed to connect to database
> 'fabmnetdb'
> 2019/09/26 15:56:50 [WARNING] Failed to connect to database
> 'postgres'
> 2019/09/26 15:56:50 [WARNING] Failed to connect to database
> 'template1'
> 2019/09/26 15:56:50 [ERROR] Error occurred initializing database:
> Failed
> to connect to Postgres database. Postgres requires connecting to a 
> specific database, the following databases were tried: [fabmnetdb 
> postgres template1]. Please create one of these database before 
> continuing


Why is it trying to connect to *any* database?

In the fabric-ca docs it shows the connection string as a single line
but your configuration file has it split over two lines.
My uneducated guess is that it is ignoring the 'password=password
dbname=fabmnetdb sslmode=verify-full'
line and thus unable to connect to fabmnetdb.
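
If that guess is right, keeping the datasource on a single line should
fix it:

db:
  type: postgres
  datasource: host=localhost port=5433 user=fabmnet_admin password=password dbname=fabmnetdb sslmode=verify-full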

Cheers,
Robert






Re: How to get timezone offset in timestamp with time zone AT TIME ZONE output.

2019-09-24 Thread rob stone



On Wed, 2019-09-25 at 10:12 +1000, Paul McGarry wrote:
> 
> On Wed, Sep 25, 2019 at 5:44 AM Adrian Klaver <
> adrian.kla...@aklaver.com> wrote:

> > 
> Andrew's function seems plausible and I need to find some time to
> test it, but I am slightly surprised there isn't a native way to get
> the output, as it seems like something that would be fairly useful
> when dealing with dates.
> 
> Perhaps another avenue would be some sort of getOffset function, eg
> 
> getOffset('2019-09-25 02:00:00+00','Australia/Sydney')
> that would return +11 (or just 11).
> 
> Presumably PostgreSQL must have some internal functionality like that
> because it can do that math on the datetimes, but it doesn't seem to
> be exposed to users.
> 
> Thanks all for your input.
> 
> Paul
> 

You can EXTRACT timezone, timezone_hour and timezone_minute from a
timestamp with time zone; 'timezone' returns the offset from UTC in
seconds, evaluated in the session's TimeZone setting.
Maybe you could just concatenate the date/time with the extracted
offset in your select statement.
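
For example (the division just turns seconds into hours):

BEGIN;
SET LOCAL timezone = 'Australia/Sydney';
SELECT extract(timezone FROM timestamptz '2019-09-25 02:00:00+00') / 3600
       AS offset_hours;  -- the UTC offset at that instant
COMMIT;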

HTH,
Robert







Re: PGPASSWORD in crypted form, for example BlowFish or SHA-256

2019-09-19 Thread rob stone



On Thu, 2019-09-19 at 15:23 +0200, Matthias Apitz wrote:
> El día Thursday, September 19, 2019 a las 10:31:01PM +1000, rob stone
> escribió:
> 
> > 
> > https://www.postgresql.org/docs/11/auth-password.html
> > 
> > Chapters 20.5 and 20.6 may give you more information.
> 
> The form of the password hash store in the PG server or interchange
> over
> the network is not my question. The question is more: When the Linux
> server starts and with this the (ESQL/C written) application servers
> are
> starting, they need the password to connect and this is not provided
> at
> this moment from some keyboard or humanbeing. It must be stored on
> the
> server and available in clear for the server, but not for other eyes
> on
> the server, i.e. the place of the sorage must be ciphered.
> 
>   matthias
> 

Sorry. More caffeine needed.

If you use pg_service.conf you could write a little program to encrypt
the password and store it in this file in its encrypted form.
Your application then obtains the connection credentials from
pg_service.conf, decrypts the password and is then able to form the
connection string to access the required database.

HTH,
Robert







Re: PGPASSWORD in crypted form, for example BlowFish or SHA-256

2019-09-19 Thread rob stone
Hello,

On Thu, 2019-09-19 at 12:30 +0200, Matthias Apitz wrote:
> Hello,
> 
> Our software, a huge ILS, is running on Linux with DBS Sybase. To
> connect to the Sybase server (over the network, even on localhost),
> credentials must be known: a user (say 'sisis') and its password.
> 
> For Sybase we have them stored on the disk of the system in a file
> syb.npw as:
> 
> $ cat /opt/lib/sisis/etc/syb/syb.npw
> sisis:e53902b9923ab2fb
> sa:64406def48efca8c
> 
> for the user 'sisis' and the administrator 'sa'. Our software has as
> shared library a blob which knows how to decrypt the password hash
> above
> shown as 'e53902b9923ab2fb' into clear text which is then used in the
> ESQL/C or Java layer to connect to the Sybase server.
> 
> For PostgreSQL the password must be typed in (for pgsql) or can be
> provided in an environment variable PGPASSWORD=blabla
> 
> Is there somehow an API in PG to use ciphered passwords and provide
> as a
> shared library the blob to decrypt it? If not, we will use the
> mechanism same as
> we use for Sybase. Or any other idea to not make detectable the
> credentials? This was a request of our customers some years ago.
> 
>   matthias
> 
> 


https://www.postgresql.org/docs/11/auth-password.html

Chapters 20.5 and 20.6 may give you more information.

HTH,
Robert






Re: How to access Postgres .pgpass file from php?

2019-09-08 Thread rob stone
Hello Howard,

On Sat, 2019-09-07 at 22:17 +, Howard Wells wrote:
> I have my php files in the web root, and the Postgres 10 logon
> credentials are in the php file.  I want to put them outside the web
> root for security, because a malicious robot could easily read the
> credentials. 
> 
> After research, I found the .pgpass file.  That looks like the ideal
> solution, but after even more research, I haven't found how to use
> that file from a php logon script. 
> 
> Here is the section from my php script:
> 
> $dsn =
> vsprintf('pgsql:host=%s;port=%s;dbname=%s;user=%s;password=%s', [
> 'host' => '000.00.00.00',
> 'port' => '5432',
> 'dbname' => '[dbname]',
> 'user' => '[username]',
> 'password' => '[password]',
> ]);
> 
> Currently I store the real dbname, user and password in the php.  My
> questions are:
> 
> 1.How can I access it from the .pgpass file instead? 
> 
> 2.Where is .phpass loccated in Apache2 Ubuntu 18.04?
> 
> Thanks for any help with this. 
> 
> Howard
> 
> 
> 

We use pg_service.conf to hold the access credentials.
It's just pg_connect("service=sandbox") for example, where 'sandbox' is
the tag for the database you wish to access.
Using Debian it is kept in the /etc/php/7.3/apache2 path, where 7.3
represents the PHP version. It needs to be owned by the Apache user,
which defaults to 'www-data'. Permissions are 0600.
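
A minimal entry looks something like this (hypothetical values):

[sandbox]
host=localhost
port=5432
dbname=mydb
user=myuser
password=secret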

HTH,
Robert







Re: A 3 table join question

2019-08-16 Thread rob stone
Hello,

On Fri, 2019-08-16 at 07:39 -0400, stan wrote:
> First let me say a huge THANK YOU to all the helpful people that
> hanging out
> on this.
> 
> I am changing from one type of work, going back to some database work
> for a
> project, as my old job was eliminated. I have made great progress on
> this,
> thanks to the time and effort of lots of folks from this list.
> 
> Now, here is my latest stumbling block. I have three "data streams"
> that all
> contribute to the total costs of a project:
> 
> * labor cost
> * material cost
> * expense report cost
> 
> I have a view that summarizes the burdened cost from each of these 3
> streams, and i am trying to create a view that shows the total
> project cost.
> 
> Here is the test data from each of the 3 streams:
> 
> stan=> select * from labor_cost_sum_view ;
>  proj_no | labor_bill_sum | labor_cost_sum 
>  -++
>45 | 10810. |  3133.1750
>   764 |  8712. |   810.7500
>   789 | 46335.5400 |  7015.5750
>  (3 rows)
> 
> stan=> select * from material_cost_sum_view ;
>  proj_no | mtrl_cost 
>  -+---
>45 | 5394.6800
>   764 | 7249.4800
>  7456 | 4007.3000
> (3 rows)
> 
> stan=> select * from expense_report_cost_sum_view ;
>  proj_no | incured_sum | burdened_cost 
>  -+-+---
>45 | 2564.98 |   2564.98
>  7456 | 1747.11 |   1747.11
> (2 rows)
> 
> And here is the clause for creating the summary table that I
> presently have:
> 
> 
> DROP VIEW overall_cost_sum_view ;
> 
> CREATE view overall_cost_sum_view as 
> select 
>   material_cost_sum_view.proj_no as l_proj_vo ,
>   labor_cost_sum_view.proj_no as m_proj_vo , 
>   expense_report_cost_sum_view.proj_no as x_proj_vo , 
>   cast (labor_cost_sum_view.labor_cost_sum as money) as
> l_burdened_cost,
>   cast (material_cost_sum_view.mtrl_cost as money)as
> m_burdened_cost,
>   cast (expense_report_cost_sum_view.burdened_cost as money)as
> x_burdened_cost ,
>   cast (
>   coalesce( labor_cost_sum_view.labor_cost_sum,
> 0) 
>   +
>   coalesce(material_cost_sum_view.mtrl_cost, 0)
>   +
>   coalesce(expense_report_cost_sum_view.burdene
> d_cost, 0)
>   as money)  as ttl_cost
> from 
>   labor_cost_sum_view 
> full joinmaterial_cost_sum_view  on
>   material_cost_sum_view.proj_no = labor_cost_sum_view.proj_no
> full joinexpense_report_cost_sum_view  on
>   expense_report_cost_sum_view.proj_no =
> labor_cost_sum_view.proj_no
> ;
> 
> Which results in the following:
> 
> stan=> select * from overall_cost_sum_view ;
> ---+---+---+-+---
> --+ -+
> 45 |45 |45 |   $3,133.18
> |   $5,394.68 |   $ 2,564.98 | $11,092.84
>764 |   764 |   | $810.75
> |   $7,249.48 |  |  $8,060.23
>|   789 |   |   $7,015.58
> | |  |  $7,015.58
>   7456
> |   |   | |   $4,007.30
> |  |  $4,007.30
>|   |  7456
> | | |   $ 1,747.11 |  $1,747.11
> (5 rows)
> 
> 
> As you can see this statement seems to work correctly on the join of
> the
> labor and material costs, but fails when I add the expense report
> stream.
> 
> What am I doing wrong here?
> 


Your view assumes that all three "streams" contain all the proj_no's,
whereas your test data for expense_report_cost_sum_view has no proj_no
= 764, and proj_no = 7456 appears only in the material and expense
streams.

How do you know which of the three "streams" contains all proj_no's?

Maybe you should consider the crosstab code so you end up with
something like this;-

proj_no | Labour | Material | Expenses
--------+--------+----------+---------
     45 |  10810 |     5394 |     2564
    764 |   8712 |     7249 |        0
    789 |  46335 |        0 |        0
   7456 |      0 |     4007 |     1747
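
Alternatively, the full-join chain can be made to line up by coalescing
the keys; an untested sketch against your views:

SELECT coalesce(l.proj_no, m.proj_no, x.proj_no) AS proj_no,
       coalesce(l.labor_cost_sum, 0) AS l_burdened_cost,
       coalesce(m.mtrl_cost, 0) AS m_burdened_cost,
       coalesce(x.burdened_cost, 0) AS x_burdened_cost,
       coalesce(l.labor_cost_sum, 0)
     + coalesce(m.mtrl_cost, 0)
     + coalesce(x.burdened_cost, 0) AS ttl_cost
FROM labor_cost_sum_view l
FULL JOIN material_cost_sum_view m
       ON m.proj_no = l.proj_no
FULL JOIN expense_report_cost_sum_view x
       ON x.proj_no = coalesce(l.proj_no, m.proj_no);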
 







Re: FW: Undelivered Mail Returned to Sender

2019-08-10 Thread rob stone
Hello,

On Sat, 2019-08-10 at 14:53 -0400, stan wrote:
> I apologize for asking, what I suspect will turn out to be a newbie
> question, but I have managed to get myself quite confused on this.
> 
> I am defining a view as follows
> 
> 
> CREATE OR REPLACE view purchase_view as 
> select 
>   project.proj_no ,
>   qty ,
>   mfg_part.mfg_part_no ,
>   mfg.name as m_name ,
>   mfg_part.descrip as description ,
>   ( 
>   SELECT 
>   name
>   FROM
>   vendor
>   WHERE
>   bom_item.vendor_key =
>   (
>   SELECT
>   vendor_key
>   FROM
>   mfg_vendor_relationship
>   WHERE
>   bom_item.mfg_key = mfg_key
>   AND
>   prefered = TRUE
>   AND
>   bom_item.project_key = project_key
>   
>   )
>   ) 
>   as v_name ,
>   /*
>   vendor.name as v_name ,
>   */
>   cost_per_unit ,
>   costing_unit.unit,
>   need_date ,
>   order_date ,
>   recieved_date ,
>   po_no ,
>   po_line_item 
> from 
>   bom_item
> right join project on 
>   project.project_key = bom_item.project_key
> inner join mfg_part on 
>   mfg_part.mfg_part_key = bom_item.mfg_part_key
> inner join vendor on 
>   vendor.vendor_key = bom_item.vendor_key
> inner join costing_unit on 
>   costing_unit.costing_unit_key = bom_item.costing_unit_key
> inner join mfg on 
>   mfg.mfg_key = bom_item.mfg_key 
> WHERE bom_item is NOT NULL  
> ORDER BY 
>   project.proj_no ,
>   mfg_part
>   ;
> 
> Most of the tables are pretty much simple key -> value relationships
> for
> normalization. I can add the create statements to this thread if it
> adds
> clarity.
> 
> The exception is:
> 
> 
> 
> CREATE TABLE mfg_vendor_relationship (
> mfg_vendor_relationship_key_serial integer DEFAULT
> nextval('mfg_vendor_relationship_key_serial')
> PRIMARY KEY ,
> mfg_key   integer NOT NULL,
> vendor_keyinteger NOT NULL,
> project_key   integer NOT NULL,
> prefered  boolean NOT NULL ,
> modtime   timestamptz DEFAULT current_timestamp ,
> FOREIGN KEY (mfg_key) references mfg(mfg_key) ,
> FOREIGN KEY (vendor_key) references vendor(vendor_key) ,
> FOREIGN KEY (project_key) references project(project_key) ,
> CONSTRAINT mfg_vendor_constraint 
> UNIQUE (
>   mfg_key , 
>   vendor_key , 
>   project_key
>   )
> );
> 
> 
> I am down to having a single row in the mfg_vendor_relationship as
> follows:
> 
>  mfg_vendor_relationship_key_serial | mfg_key | vendor_key |
> project_key |
>  prefered |modtime
>  +-++--
> ---+--+---
>164 |   1 |  1
> |   2 |
>t| 2019-08-10 14:21:04.896619-
> 04
> 
> But trying to do a select * from this view returns:
> 
> ERROR:  more than one row returned by a subquery used as an
> expression
> 
> Can someone please enlighten me as to the error of my ways?
> 
> 
> -- 
> "They that would give up essential liberty for temporary safety
> deserve
> neither liberty nor safety."
>   -- Benjamin Franklin
> 
> 
> - End forwarded message -
> 


You are selecting from a table named bom_item, but further down you
have

WHERE bom_item is NOT NULL

Shouldn't that be WHERE bom_item.some_column_name IS NOT NULL?

Cheers,
Rob






Re: Expression of check constraint

2019-07-04 Thread rob stone
Hello,

On Thu, 2019-07-04 at 05:58 +, Dirk Mika wrote:
> Hi,
>  
> if I add the following check constraint to a table:
>  
> ALTER TABLE public.times_places
>ADD CONSTRAINT ck_tp_ratified CHECK
>   (ratified IS NULL OR (ratified IN ('Y', 'N')));
>  
> It becomes the following when describing the table in psql:
>  
> Check constraints:
> "ck_tp_ratified" CHECK (ratified IS NULL OR (ratified::text = ANY
> (ARRAY['Y'::character varying, 'N'::character varying]::text[])))
>  
> The behavior of the check constraint is logically identical and this
> seems plausible to me, but I still wonder why:
> 1.does the expression x in (a, b) become the expression x =
> any(array(a, b)?
> 2.why is the array expression casted so wildly? First to
> character varying and then to text[]?
> 3.The column ratified is of type character varying(1).  Why is it
> casted to text?
>  
> Dirk
>  -- 
> Dirk Mika
> Software Developer
> 
>  


Why don't you define "ratified" as CHAR(1)?

AFAIK, constraint evaluation is based upon the column's underlying data
type.

Cheers,
Robert
 







Re: cache lookup failed for function 1 on fmgr_info_cxt_security, fmgr.c:184

2019-04-22 Thread rob stone



On Tue, 2019-04-23 at 09:15 +0700, Soni wrote:
> I install the contrib module from the postgres repo using yum
> install.
> creating extension run as postgres superuser.
> 
> Thanks,
> Soni
> 

1) Is there a uuid-ossp.control file sitting in the $SHAREDIR/extension
path?

2) Does its name exist in pg_available_extensions?

3) Is there a uuid-ossp.so in the lib path?
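
For (2), something like:

SELECT name, default_version, installed_version
FROM pg_available_extensions
WHERE name = 'uuid-ossp';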

My 2 cents.

Cheers
Rob



> On 22/04/2019 21:05, Adrian Klaver wrote:
> > On 4/21/19 9:14 PM, Soni wrote:
> > > Hello All,
> > > 
> > > CentOS Linux release 7.5.1804 (Core)
> > > Postgres 10.7.
> > > 
> > > When "create extension "uuid-ossp"", I got :
> > > 2019-04-22 02:41:53.323 UTC [10305] XX000ERROR:  XX000: cache
> > > lookup 
> > > failed for function 1
> > > 2019-04-22 02:41:53.323 UTC [10305] XX000LOCATION: 
> > > fmgr_info_cxt_security, fmgr.c:184
> > > 2019-04-22 02:41:53.323 UTC [10305] XX000STATEMENT:  create
> > > extension 
> > > "uuid-ossp" ;
> > > I tried reinstall contrib module and even all postgres component,
> > > but 
> > > still no luck.
> > > Try restarting the server also but no luck.
> > > Any idea how to resolve this ?
> > 
> > How and where are you installing Postgres and -contrib from?
> > 
> > Is the user you are running CREATE EXTENSION a superuser?
> > 
> > > Thanks,
> > > Soni
> > > 
> > > 
> > > 
> 
> 





Re: Connection string for Java to connect to PostgreSQL, using client certificates

2019-02-20 Thread rob stone
Hello,

On Thu, 2019-02-21 at 13:10 +0900, s4...@yahoo.co.jp wrote:
> I am having hard time to connect to PostgreSQL server using client
> certificate from within a Java program.
> Any insight would be helpful.
> 
> I can connect to the server using psql command line from a client
> machine(192.168.56.101) (psql -h 192.168.56.102 -U user1 -d testdb)
> [192.168.56.102 is "postgreSERVER" machine)
> //
> successful outcome looks like this:
> psql (9.6.10)
> SSL connection (protocol: TLSv1.2, cipher: ECDHE-RSA-AES256-GCM-
> SHA384, bits: 256, compression: off)
> Type "help" for help.
> 
> user1=# 
> //---
> 
> However, I have been unable to connect using a Java connection
> string.
> 
> This is more like ssl/certificate issue, and only tangentially
> related to the postgreSQL, but since I want to make it work using
> Java (running a test program from Eclipse), I am trying my luck here.
> 
> I started by creating a CA, server side key and certificate, and
> client side key and certificate. This I learnt by watching a Youtube
> video (https://www.youtube.com/watch?v=FWK3lR6bSn8).
> 
> For my own memo, I am reproducing the steps to create certificates
> and keys below, copied directly from that youtube:
> 
> After creating those files, I copied the server side files to
> /etc/postgresql/9.6/main/) (I am using Debian, and "data" directory
> seems to be "/etc/postgresql/9.6/main/").
> and the client side files to /home/user1/.postgresql folder. (had to
> created ".postgresql" folder)
> The files were chmodded to 600.
> And when I used psql from a client machine (Debian), I can connect
> happily as I mentioned above.
> 
> Now for the Java test:
> I copied the "client side" files to /home/user1/cert/ (created "cert"
> folder) 
> 
> The files are: 
> postgresql.crt (1)
> postgresql.key (2)
> root.crt (3)
> 
> 
> (1)originally created as "client.crt"
> in 192.168.56.102:/var/lib/CA/client/, and copied as postgresql.crt
> to the client side
> (2)originally created as "client.key"
> in 192.168.56.102:/var/lib/CA/client/, and copied as postgresql.key
> (3)originally created as "rootCA.crt" in 192.168.56.102:/var/lib/CA/,
> and copied as "root.crt"  
> 
> My connection string is:
> 
> Connection c = null;
> Statement st = null;
> 
> 
> try {
>  Class.forName("org.postgresql.Driver");
>  
>  //credit: https://github.com/pgjdbc/pgjdbc/issues/1364
>  String url = "jdbc:postgresql://192.168.56.102:5432/testdb";
> 
>  Properties props = new Properties();
>  props.setProperty("user","user1");
>  props.setProperty("password","");
>  props.setProperty("sslmode","verify-ca");
>
>  props.setProperty("sslrootcert","/home/user1/cert/root.crt");
>
>  props.setProperty("sslcert","/home/user1/cert/postgresql.crt");
>
>  props.setProperty("sslkey","/home/user1/cert/postgresql.key");
>  props.setProperty("loggerLevel","TRACE");
> 
>  c = DriverManager.getConnection(url,props);
>  Statement st = c.createStatement();
>  
>  c.setAutoCommit(false);
>  System.out.println("Opened database successfully");
> 
>  ResultSet rs = stmt.executeQuery( "SELECT * FROM " +
> someTableName );
>  while ( rs.next() ) {
>  ..
> ..
> }
> 
> 
> 
> 
> When I run the code (in Eclipse, in client machine/Debian), I get
> this error:
>  
> org.postgresql.util.PSQLException: Could not read SSL key file
> /home/user1/cert/postgresql.key.
> at
> org.postgresql.ssl.jdbc4.LazyKeyManager.getPrivateKey(LazyKeyManager.
> java:250)
> at
> sun.security.ssl.AbstractKeyManagerWrapper.getPrivateKey(SSLContextIm
> pl.java:1250)
> 
> I googled, and someone suggested I convert the key file to a "der"
> format.
> 
> I tried this:
> user1@192.168.56.101:~/cert$ openssl x509 -outform der -in
> postgresql.key -out postgresql.der
> 
> 
> but then it says,
> unable to load certificate
> 140663292355968:error:0906D06C:PEM routines:PEM_read_bio:no start
> line:../crypto/pem/pem_lib.c:686:Expecting: TRUSTED CERTIFICATE
> user1@192.168.56.101:~/cert$ 
> 
> I guess it expects PEM format. 
> 
> I am stuck. Please help.
> 


Please read chapter 4 documentation:-

https://jdbc.postgresql.org/documentation/head/ssl-client.html

I think you are missing some steps.

HTH,

Robert





Re: Server goes to Recovery Mode when run a SQL

2019-02-13 Thread rob stone



On Wed, 2019-02-13 at 05:32 -0700, PegoraroF10 wrote:
> is a sql FROM VALUES, why do you need tables ? 
> 
> 


I can see that, however:-

ERROR: relation "col_diasaula" does not exist
  Position: 7477
[SQL State: 42P01]

Is that a table, view or a missing CTE?






Re: Server goes to Recovery Mode when run a SQL

2019-02-13 Thread rob stone
Hello Marcos,

On Wed, 2019-02-13 at 04:41 -0700, PegoraroF10 wrote:
> Here is a SQL which will get that recovery mode. You can run it on
> any database because we created it with FROM VALUES, so ...
> But this one is that one which grows and grows memory use until all
> memory and swap space are gone, so problem occurs. That other SQL
> which gives us the same problem but immediatelly we couldn´t replay
> it without our entire database. Even if we extract just that schema
> to a new DB it doesn´t go to recovery mode. We will think a little
> bit more to create something you can test.
> 

Thanks for the revised query.

However, the create table/view scripts are needed as well.

Cheers,
Robert




Re: Server goes to Recovery Mode when run a SQL

2019-02-09 Thread rob stone
Hi,

On Sat, 2019-02-09 at 15:46 -0600, Justin Pryzby wrote:
> Hi,
> 
> 
> 
> "segfault" seems to mean you hit a bug, which we'll want more
> information to
> diagnose.  Could you install debugging symbols ?  Ubuntu calls their
> package
> postgresql-10-dbg or similar.  And start server with coredumps
> enabled, using
> pg_ctl -c -D /var/lib/postgresql/10/main (or similar).  Then trigger
> the query
> and hope to find a core dump in the data directory.  Or possibly
> it'll be
> processed into /var/crash by apport daemon, depending if that's
> running and
> enabled (check /proc/sys/kernel/core_pattern).
> 

I believe there is a bug. I've examined the query Marcos sent with his
first post and I think the parser should have listed some errors and
not tried to run the query.
I'm probably wrong but consider the following.

Around line 33 of the query:-

 tsrange(col_aula.data, (col_aula.data + (col_aula.tempo|| '
minute')::interval)) dia, 0 prevista,
  (extract(EPOCH FROM col_aula.tempo) / 60) minutosassistidos

Assuming column col_aula.tempo is of type INTEGER, is NOT NULL and
let's say contains a value of 60, then it parses as

tsrange(col_aula.data, (col_aula.data + (60' minute')::interval))

which would pull a syntax error.

You cannot extract EPOCH from a column that is of type INTEGER. Another
syntax error.

Down around line 87 onwards there are generate_series without any
parameters, and further dubious usage of EPOCH, as well as DOW.

Not having the table definitions is obviously clouding the analysis.

If there is a bug in the parser, then one of the experts will have to
opine about that.

HTH,
Robert






Re: Server goes to Recovery Mode when run a SQL

2019-02-04 Thread rob stone
Olá Marcos,

On Mon, 2019-02-04 at 08:33 -0700, PegoraroF10 wrote:
> About replication ... Logical Replication with CREATE
> PUBLICATION/SUBSCRIPTION.
> 
> Yes, some DDL commands were ran on that server but none of them were
> related
> with that select.
> Let me explain better. We have a single server with a single database
> on it.
> Each customer has its own schema and connects to it to work
> exclusively on
> that schema. So, sometimes we add some customer or change something
> on an
> old one. But this DDL change we could ran is not related with that
> schema we
> are talking. Schemas can have different structures but that schema
> which
> puts my server on recovery mode was not changed.
> 
> Maybe something related happened some days ago. When we start a new
> customer
> we add a schema, put all their tables on it and it´s ok. Our server
> has
> today 90 schemas and each schema has 100 tables, resulting in 9000
> tables.
> Some days ago we added 5 new customers on same day, so we added 500
> tables.
> Then, when we did that some selects on system tables were very very
> slow and
> that was only solved when we did a REINDEX DATABASE. Even REINDEX
> SYSTEM did
> not solve. Is this problem related with recovery mode of my server ?
> 
> 

I trust that you saved the output from your "create" scripts. I would
find those five log files and see if any errors occurred.

Also, you would have to run them on the "subscriber" server first. So,
if you "diff'd" the log files for "escola-53" (or whatever) from the
publication and the subscriber servers, they should be similar in all
respects? I don't know the answer to that question.

Chapter 31.4 in the doco is interesting reading.

Cheers,
Robert






Re: Server goes to Recovery Mode when run a SQL

2019-02-03 Thread rob stone
Olá Marcos,



> My original query, which worked for some days and then went to
> recovery mode
> was ...

If it was working and then ceased to function, did any of the following
occur:-

1) Postgres version changed?
2) OS version changed?
3) Schema changes affecting the tables/views used in your query?

I assume that you have a development data base. Can you bump up the log
level on that, run the query and see any errors in the log.

Cheers,
Robert





Re: FK Constraint with ON DELETE SET DEFAULT cascading as table owner

2019-02-01 Thread rob stone
Hello,

On Fri, 2019-02-01 at 14:01 +, Brad Leupen wrote:
> Hello,
>  
> We are using RLS on Postgres 11 to implement multi tenancy in our
> application. We have a tenant table whose id matches the tenant’s
> user role. Each table has a tenant_id FKA that defaults to
> “current_user”. All of our foreign key constraints are multipart
> (tenant_id + row_id). So far this works great except when we try to
> clean up FKA references on deletion. Here’s a script that
> demonstrates the issue in an empty database:
>  
>  
> CREATE ROLE tenant1;
>  
> ALTER DEFAULT PRIVILEGES IN SCHEMA public GRANT ALL PRIVILEGES ON
> TABLES TO tenant1;
> ALTER DEFAULT PRIVILEGES IN SCHEMA public GRANT ALL PRIVILEGES ON
> SEQUENCES TO tenant1;
>  
> CREATE TABLE tenant (
> id TEXT NOT NULL DEFAULT user PRIMARY KEY
> );
>  
> CREATE TABLE foo (
> tenant TEXT REFERENCES tenant NOT NULL DEFAULT user,
> id SERIAL NOT NULL,
> default_bar INT,
> PRIMARY KEY (tenant, id)
> );
>  
> CREATE TABLE bar (
> tenant TEXT REFERENCES tenant NOT NULL DEFAULT user,
> id SERIAL NOT NULL,
> foo_id INT,
> PRIMARY KEY (tenant, id),
> constraint foo FOREIGN KEY (tenant, foo_id) REFERENCES foo
> (tenant, id) ON DELETE CASCADE
> );
>  
> ALTER TABLE foo ADD CONSTRAINT default_bar FOREIGN KEY (tenant,
> default_bar) REFERENCES bar (tenant, id) ON DELETE SET DEFAULT;
>  
> ALTER TABLE foo ENABLE ROW LEVEL SECURITY;
> ALTER TABLE bar ENABLE ROW LEVEL SECURITY;
>
> CREATE POLICY tenant_tenant on tenant USING (id = current_user) WITH
> CHECK (id = current_user);
> CREATE POLICY foo_tenant on foo USING (tenant = current_user) WITH
> CHECK (tenant = current_user);
> CREATE POLICY bar_tenant on bar USING (tenant = current_user) WITH
> CHECK (tenant = current_user);
>  
> SET ROLE tenant1;
>  
> INSERT INTO tenant DEFAULT VALUES;
> INSERT INTO foo DEFAULT VALUES;
> INSERT INTO bar ( foo_id ) (SELECT id FROM foo );
> UPDATE foo SET default_bar = ( SELECT id FROM bar );
> DELETE FROM bar;
>  
> This script winds up failing because the “user” default value on
> foo.tenant evaluates to the table owner, not the active user role of
> “tenant1”. Is this the expected behavior? The desired outcome, after
> deleting from bar, would be for foo’s tenant to remain “tenant1” and
> its default_bar value be set to null. Is there another, cleaner way
> to achieve this?
>  
> Thank you!
> Brad



I ran your script. Output:-

postgres 11.1 => select * from tenant;
   id
-
 tenant1
(1 row)

postgres 11.1 => select * from foo;
 tenant  | id | default_bar 
-++-
 tenant1 |  1 |   1
(1 row)

postgres 11.1 => select * from bar;
 tenant  | id | foo_id 
-++
 tenant1 |  1 |  1
(1 row)

postgres 11.1 => delete from bar;
ERROR:  insert or update on table "foo" violates foreign key constraint
"foo_tenant_fkey"
DETAIL:  Key is not present in table "tenant".
postgres 11.1 => 


Your foreign key constraint is defined as:-

default_bar ==> FOREIGN KEY (tenant, default_bar) REFERENCES
bar(tenant, id) ON DELETE SET DEFAULT

If you don't specify a "default" it uses NULL.
There is no tenant.id that is NULL.
So, the foreign key validation fails.

AFAICT, it is working as intended.

Cheers,
Robert





Re: Creating 2D arrays for pg_copy_from, reading tab-delimted text file that contains comma and double quotes

2018-12-17 Thread rob stone
Hello,

On Mon, 2018-12-17 at 21:34 +0900, s4...@yahoo.co.jp wrote:
> Hello Good People of the Forum!
> 
> I am trying to insert some data into a PostgreSQL database using PHP
> and struggling to create an array so that pg_copy_from function will
> accept and process the data.
> 
> I can insert data but not the way I want- my data this case contains
> comma, space, double quotes and unpaired double quote.
> I need to use Excel to create the data and save it as tab delimited
> text file, and then convert encoding to UTF-8 (from Excel's ANSI)
> before passing it to the PHP. 
> 
> Here is my dummy data in Excel:
> rec_noitem1item2item3item4item5
> 1Denny'sorange juice1,500 yen"Dear John"32" TV
> 
> Explanation: the first row is header.
> The second row is data for each column.
> I tried to create variation like apostrophe, space between words,
> comma, double quotes and unpaired double quote (32" TV).
> 
> When I save that Excel as a tab delimited text file, I get this:
> rec_noitem1item2item3item4item5
> 1Denny'sorange juice"1,500 yen""""Dear John"""   
> "32"" TV"
> (As seen when I opened that file with Notepad)
> 
> Because my data also contains non-ascii, I saved the file with UTF-8
> encoding using the Notepad.
> 
> Then I created a two-dimensional array with PHP:
> 
> 1.$file = 'test.txt'; //tab delimited file 
> 2.$fileRead = fopen($file, 'r');
> 
> 3.$row = 1;
> 4.$twoDarray = array();
> 5.while (($line = fgetcsv($fileRead, 0, "\t")) !== FALSE) { //0 means
> I can read row whatever its length
> 6.if($row == 1){ $row++; continue; } //skip header
> 7.$line = implode(" ",$line). "\n";
> 8.$twoDarray[] = $line;
> 9.}
> 10.fclose($fileRead);
> 
> Then I passed that twoDarray to pg_copy_from.
> 
> $con=pg_connect("host=$host dbname=$dbname port=5432 user=$user
> password=$password");
> 
> 11.if (!$con) {
> 12.die("Couldn't open..\n");
> 13.}
> 
> 14.if (pg_copy_from($con, $tableName, $twoDarray) !== FALSE) {
> 15.print "OK!";
> 16.}
> 17.else{
> 18.print "Not OK.";
> 19.}
> 
> When I run the program, I have this error:
> Warning: pg_copy_from(): Copy command failed: 
> ERROR: value too long for type character varying(32) CONTEXT: COPY
> test_table, line 1, column rec_no: "1 Denny's orange juice 1,500 yen
> "Dear John" 32" TV" in testProgram.php line xx.
> 
> My table definition is:
> CREATE TABLE test_table (
> rec_no VARCHAR(32) PRIMARY KEY NOT NULL,
> item1 VARCHAR(255),..item2 .. until item5.);
> 
> Obviously, my program thinks everything in the data row is for the
> first field.
> No, no.
> 
> How to make it think that 
> 1 is for the 'rec_no' field,
> Denny's is for the 'item1' field, 
> orange juice is for the 'item2' field,
> 1,500 yen is for the 'item3' field,
> "Dear John" is for the 'item4' field and 
> 32" TV is for the 'item5' field?
> 
> When I tried removing '0' from line 5, that is, 
> while (($line = fgetcsv($fileRead, "\t")) !== FALSE) { //without read
> length
> 
> I can see data written in the database, but with some extra double
> quotes and a missing comma!
> That is data was saved as
> Denny's,  orange juice,   "1 500 yen","""Dear John""", and 
>   "32"" TV"
> into the respective fields.
> I cannot have those extra double quotes, and I cannot have missing
> comma in my data. 



fgetcsv returns an array of fields read from one record of a text file.
So $line = fgetcsv($fileRead, 0, "\t", '"') would use tab as the
delimiter and strip any enclosure character equal to ".

I don't use Excel. With Libreoffice you can set the delimiter to the
pipe character and tell it not to use enclosures. So I don't know how
to massage your Excel file so that 32" TV is rendered correctly. 

Secondly, in PHP implode takes an array and turns it into a string,
with a specified character used to delimit the values from the array.
Your script joins the fields with a space, but pg_copy_from expects the
fields within each row to be separated by tabs (its default delimiter),
which is why everything lands in the first column. Join with "\t"
instead.

I can only suggest that you read the PHP manual, as well as do some
searches for the use of pg_copy_from, although I doubt anything useful
will turn up. 
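
For what it's worth, a minimal sketch of the loop (untested, reusing
your $con connection, and assuming pg_copy_from's default of
tab-delimited rows):

$fileRead = fopen('test.txt', 'r');
$rows = array();
$first = true;
// fgetcsv splits on tab and removes the " enclosures for us
while (($fields = fgetcsv($fileRead, 0, "\t", '"')) !== false) {
    if ($first) { $first = false; continue; } // skip the header row
    $rows[] = implode("\t", $fields) . "\n";  // keep tab as delimiter
}
fclose($fileRead);

if (pg_copy_from($con, 'test_table', $rows) === false) {
    print "Not OK.";
}

Fields that themselves contain tabs or backslashes would still need
escaping as per the COPY text format, but that doesn't apply to your
sample data.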

Cheers,
Rob




Re: Importing tab delimited text file using phpPgAdmin 5.1 GUI

2018-12-06 Thread rob stone
Hello,

On Thu, 2018-12-06 at 21:29 -0800, Adrian Klaver wrote:
> On 12/6/18 7:54 PM, s4...@yahoo.co.jp wrote:
> 
> Most GUI tools I am familiar with quote identifiers by default.
> > How/where can I tell the phpPgAdmin not to add that extra "" around
> > the 
> > field name?
> 
> I don't know. I have been under the impression that phpPgAdmin was
> no 
> longer maintained/in use.
> 
> You might have more luck here:
> 
> https://sourceforge.net/p/phppgadmin/discussion/115884
> 
> > 
> > Thanks for reading and suggestions.
> 
> 

If you look at phppgadmin on sourceforge, the tarball files are all
dated April 15th., 2013.
It appears to have stalled at Postgres version 9.2.
I doubt if it will run on versions 10 or 11.

Cheers,
Rob







Running pg_upgrade Version 11

2018-11-05 Thread rob stone
Hello,

Trying to run pg_upgrade from version 10 to version 11 on the test
server and pulling the following error:-

could not open version file: /home/postgres/testing/data_v10/PG_VERSION
Failure, exiting

O/S is:- Debian 4.18.10-2 (2018-11-02) x86_64 GNU/Linux

Running pg_upgrade as user postgres.

These are the permissions on the version 10 file:-

-rw--- 1 postgres postgres 3 Mar 23  2018 PG_VERSION

These are the permissions on the version 11 file:-

-rw--- 1 postgres postgres 3 Nov  6 14:50 PG_VERSION

This is the command being run:-

/usr/lib/postgresql/11/bin/pg_upgrade -b /usr/lib/postgresql/10/bin -B
/usr/lib/postgresql/11/bin -d /home/postgres/testing/data_v10 -D
/home/postgres/testing/data_v11


Logged in as user postgres and postgres owns the files created by
initdb, so is this a permissions problem or am I having a brain fade?


TIA,
Rob






Re: PostgreSQl, PHP and IIS

2018-09-24 Thread rob stone
Hi

On Mon, 2018-09-24 at 10:46 +0100, Mark Williams wrote:
> Hi thanks,
> 
> Sadly PHP and IIS isn't the issue. It’s when you bring Postgres into
> the fold that I am getting a problem. But thanks.
> 
> Mark
> 
> 
I'm not a Windows user and know little about setting up Postgres under
IIS, but you might find something useful in the users' comments at this
link:-

http://www.php.net/manual/en/pgsql.setup.php


Cheers,
Rob






Re: Which background task looks for pg_xlog in 10?

2018-08-28 Thread rob stone
Hello

On Tue, 2018-08-28 at 08:32 +0200, Johann Spies wrote:
> I see this in /var/log/postgresql/postgresql-10-main.log:
> 
> postgres postgres@template1 ERROR:  could not open directory
> "pg_xlog": No such file or directory
> 
> postgres postgres@template1 STATEMENT:  SELECT count(*) AS segments
> FROM pg_ls_dir('pg_xlog') t(fn) \
> WHERE fn ~ '^[0-9A-Z]{24}$'
> 
> Where does this come from?
> 
> Regards
> Johann
> --
> Because experiencing your loyal love is better than life itself,
> my lips will praise you.  (Psalm 63:3)
> 

There is no directory pg_xlog in V10; it was renamed to pg_wal.

You might have some incompatible software somewhere or an application
is running this query.
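
The equivalent check against version 10 would need to use pg_wal, e.g.
(a sketch; WAL segment names are 24 hex digits):

SELECT count(*) AS segments
FROM pg_ls_dir('pg_wal') t(fn)
WHERE fn ~ '^[0-9A-F]{24}$';

or, using the function added in version 10:

SELECT count(*) AS segments FROM pg_ls_waldir();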

Why does it say "postgres@template1"?

Cheers,
Rob




Re: Uncaught PHP Exception Doctrine\DBAL\Exception\UniqueConstraintViolationException: "An exception occurred while executing 'UPDATE

2018-08-14 Thread rob stone
Hello,

On Tue, 2018-08-14 at 07:48 +, Jarosław Torbicki wrote:
> Hello,   
> I used PostgreSQL 9.3 but I executed upgrade few days ago.
> Now, I am using 10.4 PostgreSQL and:
> doctrine/annotations v1.2.7
> doctrine/cache   v1.4.2  
> doctrine/collections v1.3.0
> doctrine/common  v2.7.3
> doctrine/dbalv2.5.13
> doctrine/doctrine-bundle v1.5.2 
> doctrine/doctrine-cache-bundle   v1.0.1
> doctrine/inflector   v1.0.1
> doctrine/instantiator1.0.5 
> doctrine/lexer   v1.0.1 
> doctrine/orm v2.5.14
>  
>  
> I have a problem with ManyToOne relation.
> For example, I have main object with three child and when I execute
> on main object
> $em = $this->getDoctrine()->getManager();
> $em->merge($data);
> $em->flush();
> I sometimes get ERROR message like:
> Uncaught PHP Exception
> Doctrine\DBAL\Exception\UniqueConstraintViolationException: "An
> exception occurred while executing 'UPDATE
>  
> I get this ERRROR message not for all main object and not for all
> child. For example, first update child object is ok but in second I
> get error.
>  
> SQL prepared by doctrine:
> UPDATE child_table SET id = ?, name = ?, object_name = ?, object_size
> = ? WHERE id = ?' with params ["2", "test Name object 2", "test name
> object 2", "1234", 3]
>  
> In this sql the doctrine tries update object with id=3 using data
> from object with id = 2.
>  
> This problem didn’t occur before executing upgrade to 10.4 version.
>  
> Can you help me and give some tips?
>  
>  
> Pozdrawiam,
> __
> Jarosław Torbicki
> Analityk
>  


I haven't a clue what "doctrine" does but your update statement makes
no sense. It effectively is doing this based on your list of params:-

UPDATE child_table SET id = 2, name = 'test Name object 2', object_name
= 'test name object 2', object_size = 1234 WHERE id = 3;

Why would you change the value of the id column?
If the column "id" is a primary key, you'll no doubt pull a constraint
violation.

Maybe by altering your postgresql.conf file to log all statements you
would be able to follow the sequence of statements until the constraint
violation occurs, and thus make the necessary changes to the "doctrine"
code.
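
For example, in postgresql.conf (a reload is enough for these
settings):

log_statement = 'all'               # log every statement
log_line_prefix = '%m [%p] %u@%d '  # timestamp, pid, user@database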

HTH,
Rob 




Re: Postgresql 10.4 installation issues on Ubuntu 14.05

2018-07-30 Thread rob stone
Hello,

On Mon, 2018-07-30 at 09:01 +0300, vardenis pavardenis wrote:
> 
> tadas@ubuntu:~$ dpkg -l | grep postgres
> ii  pgdg-keyring              2017.3              all    keyring for apt.postgresql.org
> iU  postgresql                10+191.pgdg14.04+1  all    object-relational SQL database (supported version)
> iF  postgresql-10             10.4-2.pgdg14.04+1  amd64  object-relational SQL database, version 10 server
> ii  postgresql-client-10      10.4-2.pgdg14.04+1  amd64  front-end programs for PostgreSQL 10
> ii  postgresql-client-common  191.pgdg14.04+1     all    manager for multiple PostgreSQL client versions
> ii  postgresql-common         191.pgdg14.04+1     all    PostgreSQL database-cluster manager
> tadas@ubuntu:~$
> 
> 

Your installation has not completed successfully.

iU  postgresql 10+191.pgdg14.04+1  --- the letter "U" means the package
has only been un-packed.

iF  postgresql-10   10.4-2.pgdg14.04+1  -- the letter "F" means the
package is only half-configured.
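
Assuming nothing else is broken, something along these lines usually
finishes the pending configuration:

sudo dpkg --configure -a
sudo apt-get install -f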


HTH,
Robert




Re: cursor "x" does not exist

2018-04-15 Thread rob stone
Hello Bob,

On Sat, 2018-04-14 at 15:44 +0100, Bob Jones wrote:
> Hello,
> 
> 
> 
> P.S. As a side-question, if anyone here has experience in using
> Postgres as a backend to PHP, are refcursors the way to go or should
> I
> be thinking of SETOF or other return styles ?
> 


If you have a business requirement along the lines of "display all
outstanding orders for a customer" it is best to create a view that
serves that purpose. Then all you need to code is a "select * from
v_outstanding_orders where customer_id = $1";
If there are any outstanding orders you can page through the
associative array returned by the query in order to display the
results.
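
A minimal sketch, with hypothetical table and column names:

CREATE VIEW v_outstanding_orders AS
    SELECT o.customer_id, o.order_id, o.order_date, o.order_total
      FROM orders o
     WHERE o.status = 'outstanding';

-- and then from PHP:
-- $result = pg_query_params($con,
--     'SELECT * FROM v_outstanding_orders WHERE customer_id = $1',
--     array($customerId));
-- $rows = pg_fetch_all($result);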

HTH,
Rob





Re: Extract dates of a given day

2018-04-06 Thread rob stone
Hello,

On Fri, 2018-04-06 at 11:32 +0100, hmidi slim wrote:
> Hi,
> I have a table availability: id (integer), product varchar(255),
> period (daterange)
> I want to extract dates from a given period matching a given day. 
> E.g:  for the period from 01/04/2018 - 30/04/2018 and for day =
> Monday I want to get 
> 02/04/2018
> 09/04/2018
> 16/04/2018
> 23/04/2018
> 30/04/2018
> 
> I want to make a query such as (the query doesn't work ) but I want
> to extract dates from daterange type:
> select * from availability
> where period @> '[2018-04-02, 2018-04-20]'
> and extract(dow from period_date) = 1
> 
> How can I extract dates from daterange?


There is no column called "period_date".

A daterange only holds two bounds, so you cannot list the individual
Mondays inside one. You might try generating the days in each range and
filtering on the day of week:-

SELECT a.id, a.product, d::date AS monday
FROM availability a,
     generate_series(lower(a.period), upper(a.period),
                     interval '1 day') AS d
WHERE a.period @> d::date
  AND extract(dow FROM d) = 1
ORDER BY a.product, monday;

Not tested. Not sure exactly what your application needs.

HTH,
Rob



Re: Foreign Key locking / deadlock issue.... v2

2018-03-24 Thread rob stone
Hello Phil,

On Fri, 2018-03-23 at 15:42 +, HORDER Phil wrote:
> Rob, thanks for looking.
>  
> The "pause" is only to not-do-the-commit yet, so that the child
> process can then try and access the record - I've not left anything
> out.
> This code is my own demo, not a cut from our production code.
>  
> Did you run this as the 'postgres' superuser?  That would bypass the
> RLS, and probably avoid the problem.

No. Never use super user to test scripts.

>  
> I checked by creating a new user, and ran my code in that:
>  
> Sql> create user test password 'password';
>  
> After running my test script, psql \dp shows:
>  
> Schema |   Name| Type  |Access privileges   
> | Column privileges | Policies
> +---+---+--
> ---+---+--
> public | eln   | table |
> |   |
> public | pl| table |
> |   | security_policy:+
> |   |   |  
>   |   |   (u): true +
> |   |   |
> |   |   (c): true
>  
> (plus some other stuff for postGIS)
>  
> Here’s my code again:
>  
> drop table if exists ELN;
> drop table if exists PL;
>  
> Create table PL
> (pl_id integer,
> m_number text
> );
>  
> alter table PL ENABLE row level security;
> alter table PL FORCE row level security;
>  
> drop policy if exists security_policy on PL ;
> CREATE POLICY security_policy on PL FOR ALL TO PUBLIC USING (true)
> WITH CHECK (true);
>  
> Alter table PL add constraint PL_PK primary key (pl_id);
> Insert into PL values (1, null);
> Insert into PL values (2, null);
> Insert into PL values (3, null);
>  
> Create table ELN
> (event_id integer,
> pl_id integer
> );
>  
>  
> Alter table ELN add constraint ELN_PK primary key (event_id);
> Alter table ELN add constraint ELN_PL_FK foreign key (pl_id)
> references PL (pl_id);
>  
> Insert into ELN values (301, null);
> Insert into ELN values (302, null);
> Insert into ELN values (303, null);
>  
> commit;
>  
> -- process 1:
> start transaction;
> update pl set m_number = '234' where pl_id = 2;   
> update pl set m_number = '345' where pl_id = 3;  
>  
> -- wait here, and run process 2

This was never explained. You are running "process 2" in an entirely
different session.

If that is true, why don't you commit the updates to table pl and
release the locks?
The fact that eln.pl_id is a foreign key is irrelevant. Trying to set
eln.pl_id to a value that doesn't exist in pl.pl_id will cause an
exception due to the constraint.

Cheers,
Rob

> commit;
>  
>  
> -- process 2:
> start transaction;
> update eln set pl_id = 3 where event_id = 303; 
> update eln set pl_id = 2 where event_id = 302;
> -- Gets blocked by process 1
> commit;
>  
>  
> Phil Horder
> Database Mechanic
>  
> -Original Message-
> From: rob stone [mailto:floripa...@gmail.com] 
> Sent: 23 March 2018 11:43
> To: HORDER Phil; pgsql-general
> Subject: Re: Foreign Key locking / deadlock issue v2
>  
> Hello Phil,
>  
> I've run your sample script on 9.6.5 and 10.3.
> The only thing that I added was a commit; after the initial inserts
> just to ensure the rows were saved.
> No errors were reported for either version.
>  
> The output of \dp after running was:-
>  
>                          Access privileges
>  Schema | Name | Type  | Access privileges | Column privileges | Policies
> --------+------+-------+-------------------+-------------------+------------------
>  public | eln  | table |                   |                   |
>  public | pl   | table |                   |                   | security_policy:+
>         |      |       |                   |                   |   (u): true
>  
>  
> --> including the FOR ALL in the create policy statement as well as
> WITH CHECK(true).
>  
>                          Access privileges
>  Schema | Name | Type  | Access privileges | Column privileges | Policies
> --------+------+-------+-------------------+-------------------+------------------
>  public | eln  | table |                   |                   |
>  public | pl   | table |                   |                   | security_policy:+
>         |      |       |                   |                   |   (u): true     +
>         |      |       |                   |                   |   (c): true

Re: Foreign Key locking / deadlock issue.... v2

2018-03-23 Thread rob stone
Hello Phil,

I've run your sample script on 9.6.5 and 10.3.
The only thing that I added was a commit; after the initial inserts
just to ensure the rows were saved.
No errors were reported for either version.

The output of \dp after running was:-

                         Access privileges
 Schema | Name | Type  | Access privileges | Column privileges | Policies
--------+------+-------+-------------------+-------------------+------------------
 public | eln  | table |                   |                   |
 public | pl   | table |                   |                   | security_policy:+
        |      |       |                   |                   |   (u): true


--> including the FOR ALL in the create policy statement as well as
WITH CHECK(true).

                         Access privileges
 Schema | Name | Type  | Access privileges | Column privileges | Policies
--------+------+-------+-------------------+-------------------+------------------
 public | eln  | table |                   |                   |
 public | pl   | table |                   |                   | security_policy:+
        |      |       |                   |                   |   (u): true     +
        |      |       |                   |                   |   (c): true


The only mystery is what happens here:-



-- …. Pause while other processing happens …..
(commit;)
 
 
-- Child table processing – occurs often & quickly. Starts after parent
update.

</snip>


I'd like to know more about RLS, so I am happy to keep trying to de-bug
your script.

On a production application you'd be testing for errors and raising
exceptions so as to inform users that a problem occurred.

So, without knowing what occurs during "Pause while other processing
happens" I can't help any further.

Cheers,
Rob




Re: Foreign Key locking / deadlock issue.... v2

2018-03-22 Thread rob stone
Hello Phil,

On Wed, 2018-03-21 at 15:26 +, HORDER Phil wrote:
> OK,
> Let's try again, with a full script, and including the bit that makes
> the difference…
>  
> Hi,
> I’m trying to understand why I’m getting a deadlock issue, and how to
> work around it.
>  
> At base, I think the problem is:
> 1.Updates to a parent table are creating row level write
> locks.
> 2.updates to a child table set the foreign key value to
> the parent table, which are then blocked – because there is Row Level
> Security on the parent table.
>  
>  
>  
> -- The setup
> 
>  
> drop table if exists ELN;
> drop table if exists PL;
>  
> Create table PL
> (pl_id integer,
> m_number text
> );
>  
>  
> alter table PL ENABLE row level security;
> alter table PL FORCE row level security;
>  
> -- A dummy security policy, that allows everybody access to
> everything
> drop policy if exists security_policy on PL ;

I don't use row level security but after reading the documentation, I'd
alter this:-

> create policy security_policy on PL TO public using (true);   -- (1)

to

CREATE POLICY security_policy on PL FOR ALL TO PUBLIC USING (true) WITH
CHECK (true);

and try again.

HTH,
Rob




Re: Foreign Key locking / deadlock issue.

2018-03-21 Thread rob stone
Hello Phil,


On Tue, 2018-03-20 at 11:46 +, HORDER Phil wrote:
> Hi,
> I’m trying to understand why I’m getting a deadlock issue, and how to
> work around it.
>  
> At base, I think the problem is:
> 1.   Updates to a parent table are creating row level write
> locks,
> 2.   updates to a child table set the foreign key value to the
> parent table, which are then blocked.
>  
> While investigating, I found the row locking documentation, which
> says that I can request read locks that don’t block.
> But my sample code still gets blocked.
> https://www.postgresql.org/docs/9.6/static/explicit-locking.html
>  
> Can someone explain what I’m missing please?
>  
> parent process
> ---
>  
> start transaction;
> select * from pl where pl_id in (2,3) for no key update of pl;  (1)
> update pl set m_number = '234' where pl_id = 2; (2)
> update pl set m_number = '345' where pl_id = 3; (3)
>  
> child process
> ---
>  
> start transaction;
> select pl_id from pl where pl_id in (2,3) for key share of pl;  (4)
> update eln set pl_id = 3 where event_id = 303;  (5)
> update eln set pl_id = 2 where event_id = 302;  (6)
>  
>  
> My Parent process inserts and updates on the PL table, but never
> changes the key value.
> My Child process inserts and updates on the ELN table, and can set
> the FK reference value to the PL table.
>  
> I can understand that the default lock on the PL update will block
> the foreign key check from the ELN table.
> Why does this example still get blocked?
>  
> Is there a way around this without dropping the foreign key?
>  
> (And if so…. How can I get this to work in Spring Data / JPA?)
>  
>  
> Phil Horder
> Database Mechanic
>  
> Thales
> Land and Air Systems
> Horizon House, Throop Road, Templecombe, Somerset, BA8 0DH, UK
>  


The details are a bit sparse. I don't understand "why" you are using
"no key update" or "for key share".

Anyway, if the business rules dictate that table pl and table eln
should be updated in a single transaction, the way I'd do this would be
to create a method or a sub-routine taking the connection handle plus
the values of pl_id, m_number and event_id to be used. Then:-

BEGIN;
SELECT m_number FROM pl WHERE pl_id = $2 FOR UPDATE;
UPDATE pl SET m_number = $3 WHERE pl_id = $2;
Test for success.
If successful, then:-
SELECT pl_id FROM eln WHERE event_id = $4 FOR UPDATE;
UPDATE eln SET pl_id = $2 WHERE event_id = $4;
If successful then COMMIT;

If either of the updates fail, ROLLBACK; and inform the user that the
transaction failed.

If you are using Java then remember that auto-commit is usually set
"on" by default.

Doing a SELECT . . . FOR UPDATE on a single row just locks that row,
and if another session also tries to do the same thing it simply waits
until the first session commits or rolls back (or until lock_timeout
expires, if one has been set).
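
If you would rather fail fast than wait, a sketch:

SELECT m_number FROM pl WHERE pl_id = $2 FOR UPDATE NOWAIT;
-- raises an error immediately if the row is already locked

or set a ceiling on the wait for the session:

SET lock_timeout = '2s';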

You haven't said if this code is being generated dynamically by the
application. 

HTH,
Rob



Re: Snapshot recovery or rolling back commited

2018-03-15 Thread rob stone
Hello Marc-Antoine,

On Thu, 2018-03-15 at 10:43 +0100, Marc-Antoine Nüssli wrote:
> Hi there,
> 
> I saw there was a question about a similar topic recently but my use
> case is quite different so there may be a different answer.
> Roughly, I have a database which is updated by a single stream of
> updates (through jdbc), so I have a single write transaction at any
> time. However, sometimes I need to cancel some of the last updates in
> order to re-apply different updates, but the readers should always be
> able to query the most up-to-date state, including updates that could
> potentially be cancelled afteward.
> In other words, I need to be able, at any time, to rollback the last
> updates up to a certain point (which is moving but always known) and
> to be able to query the most up-to-date state (including updates that
> could be rollbacked later) 
> Putting differently, I need two version of the same database,
> "consistent" and "latest", with the "consistent" version being some
> updates behind latest (or sometimes at the same state) and sometimes
> the "latest" version must be restored back to the "consistent"
> version.
> 
> An approach would be to have a single transaction for the updates
> with a "moving" savepoint, so that we can always rollback to the last
> "correct" state. But, as far as I know, there is no way to query the
> updated snapshot of an uncommitted transaction outside of it. 
> Indeed, we cannot do a READ UNCOMMITTED transaction, and exporting a
> transaction snapshot does not show the updates made by the original
> transaction.
> The more I think and read about this, the more it seems this cannot
> be achieved within a MVCC architecture...
> 
> I also thought of a different approach that would use PITR and WAL
> features, but as far as I read about it, this would require a lot of
> filesystem-level scripting as well as restarting postgres each time
> we want to restore to a previous state. This sounds quite difficult
> to accomplish automatically and not very efficient for a
> production system.
> 
> The last solution I thought of is to use an audit history (such as in
> https://wiki.postgresql.org/wiki/Audit_trigger_91plus). Then I can
> use it to rollback the last changes. And as I know up to which point
> I want to restore, I can periodically clean the history to keep only
> the last required changes. Currently, this is the only solution that
> seems doable to me but I'm not 100% sure that it would capture all
> possible cases and that in some situations, restoring from the audit
> history will not restore to the exact same state that it was...
> 
> So, my question is do you think any of the proposed approaches is
> doable? And if yes, which one and how?
> Otherwise, do you have any thought on how to accomplish such a kind
> of use-case using postgres?
> 
> Thanks in advance for your response!
> 
> Best regards,
> Marc-Antoine Nüssli



Without seeing your schema, but assuming that this stream of updates
only affects a single table, have you considered adding a boolean
column (default value true) to that table? Assuming it could be
included in a surrogate primary key, then when an "update" is to be
processed you would lock the current row, set its boolean to false, and
insert a new row.

You could then schedule a cron job to physically delete all rows where
the boolean is false and if required, insert them into an archive table
just in case you need to review the history.

This way, your application is useable 24/7.
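
A minimal sketch of the idea, with hypothetical table and column names:

BEGIN;
SELECT * FROM stream_data
 WHERE item_id = $1 AND is_current FOR UPDATE;
UPDATE stream_data SET is_current = false
 WHERE item_id = $1 AND is_current;
INSERT INTO stream_data (item_id, payload, is_current)
 VALUES ($1, $2, true);
COMMIT;

-- and from the cron job:
INSERT INTO stream_data_archive
    SELECT * FROM stream_data WHERE NOT is_current;
DELETE FROM stream_data WHERE NOT is_current;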

Just a thought.

Cheers,
Rob



Re: JDBC4 and setting statement_timeout: responds "is not yet implemented"

2018-03-05 Thread rob stone
Hi Gunnar,

On Mon, 2018-03-05 at 12:27 +0100, Gunnar Halvorsen wrote:
> Dear Sirs,
> As quite new in this discussiongroup I have red a lot about
> statement_timeout, but I can't find any answer on the following
> problem:
> 
>  My JavaEE-code responds the fpllowing Error message:
> 
>  Method
> org.postgresql.jdbc4.Jdbc4PreparedStatement.setQueryTimeout(int) is
> not yet implemented
> 
> Javacode:
> PreparedStatement AA = null;
> ..
> ..
> AA.setQueryTimeout( 5000 );  // This sentence generates the fault-
> message
> 
> 
> PostgreSQL version: 9.1.24
> ( Only READ-access )
> 
> JDBC4 version:   postgresql-9.0-801.jdbc4.jar
> 
> 
> My application reads contiously a SELECT-sentence every 10 seconds,
> and needs to be aborted if it not returns in during of 5 seconds.
> 
> Is the functiion setQueryTimeout(INT) not yet implemented in JDBC4,
> or is it the PostgreSQK-server which is not yet implemented?
> 
> How shall I implement statement_timeout in my JavaEE code, when I
> only has Read-access to PostgreSQL, to avoid this ErrorMessage?
> 
> Regards
> -Gunnar
> 
>  
> 
> 
> 


There was a fix to this in Version 9.4-1205 of the driver.
Also, your version of PostgreSQL is ancient.

I think an upgrade would be worthwhile.
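
Note also that java.sql.Statement.setQueryTimeout() takes its argument
in seconds, not milliseconds, so once you are on a driver that
implements it, a five second limit would be:

AA.setQueryTimeout( 5 );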

HTH,
Robert



Re: weird result by changing type enum array to text array of a column

2018-02-05 Thread rob stone


On Mon, 2018-02-05 at 11:26 +0100, Thomas Poty wrote:
> Hello,
> 
> I m running 9.5.10 on centos 7.
> 
> I have a colmun "TestFields", its type is an array of an enum.
> 
> I would like to change the type of this column by an array of text.
> 
> So i execute  -> alter table "controldetailpromoters" alter column
> "TestFields" type text[]
> The command seems to be correctly executed
>  but  when I execute a "\d" of my table, the column "TestFields" is
> always an array of the enum
> 
> 
> Do you have any idea of what is going wrong?
> 
> Thank you.
> Thomas


Hello Thomas,

See chapter 8.7.

Your array "TestFields" contains the internal enum references that are
converted to the defined values by cross referencing to the catalogue
table pg_enum.
I think you will have to add another column to the table as an array of
text, write a little program to cross-reference the TestFields array to
pg_enum, and update the new column with the textual representation of
the enums, then alter your app accordingly.
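
Alternatively, ALTER TABLE with a USING clause may do the conversion in
one step; a sketch, not tested on 9.5:

ALTER TABLE "controldetailpromoters"
    ALTER COLUMN "TestFields" TYPE text[]
    USING "TestFields"::text[];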

HTH,
Rob



Re: What generates pg_config.h?

2018-01-05 Thread rob stone
Hello,

On Fri, 2018-01-05 at 12:19 -0800, Adrian Klaver wrote:
> On 01/05/2018 11:57 AM, Travis Allison wrote:
> > Distro and version:  Ubuntu 16.04 LTS.
> > 
> > If memory serves, I installed 9.5 using synaptic and 9.6 using apt-
> > get, 
> > where I followed the instructions from this site: 
> > https://www.postgresql.org/download/linux/ubuntu/
> > 
> > I also have an anaconda installation of postgresql, but that is a 
> > version less than 9.6.
> > 
> > Here are the results for postgresql using dpkg-query -l
> 
> Well somehow a Postgres 10 version of pg_config.h got installed.
> Is there an instance of Postgres 10 running?
> 
> > 
> > ii  postgresql 9.5+173
> >  all 
> >   object-relational SQL database (supported
> > version)
> > ii  postgresql-9.5 9.5.3-
> > 0ubuntu0.16.04 
> > amd64object-relational SQL database, version
> > 9.5 server
> > ii  postgresql-9.6 9.6.5-1.pgdg16.04+2 
> >   amd64object-relational SQL database, version
> > 9.6 
> > server
> > ii  postgresql-client-9.5  9.5.3-
> > 0ubuntu0.16.04 
> > amd64front-end programs for PostgreSQL 9.5
> > ii  postgresql-client-9.6  9.6.5-1.pgdg16.04+2 
> >   amd64front-end programs for PostgreSQL 9.6
> > ii  postgresql-client-common   173
> >  all 
> >   manager for multiple PostgreSQL client
> > versions
> > ii  postgresql-common  173
> >  all 
> >   PostgreSQL database-cluster manager
> > ii  postgresql-contrib-9.5 9.5.3-
> > 0ubuntu0.16.04 
> > amd64additional facilities for PostgreSQL
> > ii  postgresql-contrib-9.6 9.6.5-1.pgdg16.04+2 
> >   amd64additional facilities for PostgreSQL
> > ii  postgresql-server-dev-9.5  9.5.3-
> > 0ubuntu0.16.04 
> > amd64development files for PostgreSQL 9.5 
> > server-side programming
> > ii  postgresql-server-dev-9.6  9.6.5-1.pgdg16.04+2 
> >   amd64development files for PostgreSQL 9.6 
> > server-side programming
> > 
> > 
> 
> 
> 


Shouldn't the OP check the libpq-dev libraries as well?
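
Something like this would show what is installed and which package owns
the stray header:

dpkg -l | grep libpq
dpkg -S pg_config.h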

Not an expert. My two cents.
Rob



Re: Does PostgreSQL check database integrity at startup?

2017-12-26 Thread rob stone
Hello,


On Tue, 2017-12-26 at 18:58 -0300, Alvaro Herrera wrote:Hello,
> David Steele wrote:
> 
> > pgBackRest will validate all page checksums (including indexes,
> > etc.) in the
> > cluster during backup.  Full backups check everything,
> > incr/differential
> > backups check only the files that have changed.
> 
> If a table or index file is of zero length when backed up, as in the
> described case, nothing will be checked, right?  I mean, there is
> nothing externally indicating that the file ought to be of a
> different
> size.  Am I wrong?  So Edson's situation here would not raise any red
> flags.
> 


Could the following occur:-
1) Your app. issues a BEGIN followed by an INSERT.
2) Postgres decides to open a new file in order to store the new row.
3) Your app. then does a ROLLBACK.

Wouldn't that leave you with a zero length file on disk?

There's no reason for Postgres to delete the file just because a
rollback was issued. All it has to do is clear the buffer in memory.
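
If anyone wants to check a suspect table, the file behind it can be
located with:

SELECT pg_relation_filepath('some_table');

and then inspected (size, timestamps) from the shell.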

My 2 cents.

Rob 



Re: ISO8601 vs POSIX offset clarification

2017-12-04 Thread rob stone


On Mon, 2017-12-04 at 14:03 +1100, Bharanee Rathna wrote:
> To be more specific, I expected the output of both these queries to
> be the same.
> 
> # select '2017-12-01 11:00:00 +11:00'::timestamp with time zone at
> time zone '+11:00';
>   timezone   
> -
>  2017-11-30 13:00:00
> 
> # select '2017-12-01 11:00:00 +11:00'::timestamp with time zone at
> time zone 'Australia/Melbourne';
>   timezone   
> -
>  2017-12-01 11:00:00
> 
> Cheers
> 


select '2017-12-01 11:00:00 +11:00'::timestamp with time zone at time
zone INTERVAL '+11:00';

Result:-

2017-12-01 11:00:00.0


You need the INTERVAL keyword when using a numeric offset instead of a
time zone name. It's in the doco.

Without it, the bare string '+11:00' is read as a POSIX-style time zone
specification, and POSIX reverses the ISO 8601 sign convention
(positive offsets lie west of Greenwich), so instead of throwing an
error the parser quietly applies the opposite offset. That is why your
first query returned 13:00.