Re: append jsonb array to another jsonb array

2021-02-17 Thread Joao Miguel Ferreira
Hi, David, Pavel,

Thank you for your indications

Joao


On Wed, Feb 17, 2021 at 7:50 PM Pavel Stehule 
wrote:

> Hi
>
> On Wed, 17 Feb 2021 at 20:44, Joao Miguel Ferreira <
> joao.miguel.c.ferre...@gmail.com> wrote:
>
>> Hello all,
>>
>> In PL/pgSQL I have a few jsonb variables (each instantiated with an array
>> of objects) and I need to append them to produce the final result. All of
>> them are small in size (max 30 elements each). We are using Pg 11.
>>
>> Example:
>> a = [ x, y ]
>> b = [ z , w ]
>> result would be [ x, y, z, w ]
>>
>> What would you consider to be a suitable approach ?
>>
>
>  postgres=# select '[10,20,30]'::jsonb || '[100,20]'::jsonb;
> ┌───────────────────────┐
> │       ?column?        │
> ╞═══════════════════════╡
> │ [10, 20, 30, 100, 20] │
> └───────────────────────┘
> (1 row)
>
> Regards
>
> Pavel
>
>
>> thanks
>> João
>>
>>
>>


Re: append jsonb array to another jsonb array

2021-02-17 Thread David G. Johnston
On Wednesday, February 17, 2021, Joao Miguel Ferreira <
joao.miguel.c.ferre...@gmail.com> wrote:

> We are using Pg 11.
>
> Example:
> a = [ x, y ]
> b = [ z , w ]
> result would be [ x, y, z, w ]
>
> What would you consider to be a suitable approach ?
>

That’s called concatenation.

 https://www.postgresql.org/docs/11/functions-json.html

David J.
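For archive readers, both answers come down to the `||` operator; a minimal sketch of using it inside PL/pgSQL, with illustrative variable names matching the question's example:

```sql
-- Sketch: jsonb array concatenation with ||, available since PG 9.5
-- (so fine on PG 11). Values here are illustrative.
DO $$
DECLARE
    a      jsonb := '["x", "y"]';
    b      jsonb := '["z", "w"]';
    result jsonb;
BEGIN
    result := a || b;          -- yields ["x", "y", "z", "w"]
    RAISE NOTICE 'result = %', result;
END
$$;
```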


Re: Slow index creation

2021-02-17 Thread hubert depesz lubaczewski
On Tue, Feb 16, 2021 at 07:30:23PM +0100, Paul van der Linden wrote:
> Hi,
> I have 2 functions:
> CREATE FUNCTION foo(a text, b text, c text) RETURNS text AS
> $func$
> DECLARE
>     retVal text;
> BEGIN
>     SELECT
>       CASE
>         WHEN a='v1' AND b='b1' THEN 'r1'
>         WHEN a='v1' THEN 'r2'
>         ... snip long list containing various tests on a,b and c
>         WHEN a='v50' THEN 'r50'
>       END INTO retval;
>     RETURN retVal;
> END
> $func$ LANGUAGE PLPGSQL IMMUTABLE;

If this function were converted to a SQL function it could be faster, since
the planner could inline it.

> CREATE FUNCTION bar(r text, geom geometry) RETURNS int AS
> $func$
> DECLARE
>     retVal int;
> BEGIN
>     SELECT
>       CASE
>         WHEN r='r1' AND st_area(geom)>100 THEN 1
>         WHEN r='r1' THEN 2
>         ... snip long list containing various tests on r and st_area(geom)
>         WHEN r='r50' THEN 25
>       END INTO retval;
>     RETURN retVal;
> END
> $func$ LANGUAGE PLPGSQL IMMUTABLE;

The first thing I notice is that it seems (from the code and the comment)
that you compute st_area(geom) multiple times.

If that's really the case, why not cache it in a variable?

declare
    v_area float := st_area(geom);
begin
...

and then use v_area instead of st_area(geom).

depesz
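A sketch of the caching suggestion above applied to the bar() function from the question; the long branch list is elided, so this is illustrative rather than the original definition:

```sql
CREATE OR REPLACE FUNCTION bar(r text, geom geometry) RETURNS int AS
$func$
DECLARE
    v_area float := st_area(geom);  -- computed once, reused in every branch
BEGIN
    RETURN CASE
             WHEN r = 'r1' AND v_area > 100 THEN 1
             WHEN r = 'r1' THEN 2
             -- ... remaining branches, each using v_area
             --     instead of calling st_area(geom) again ...
             WHEN r = 'r50' THEN 25
           END;
END
$func$ LANGUAGE plpgsql IMMUTABLE;
```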




Re: Slow index creation

2021-02-17 Thread Ron

On 2/16/21 12:30 PM, Paul van der Linden wrote:

Hi,

I have 2 functions:
CREATE FUNCTION foo(a text, b text, c text) RETURNS text AS
$func$
DECLARE
    retVal text;
BEGIN
    SELECT
      CASE
        WHEN a='v1' AND b='b1' THEN 'r1'
        WHEN a='v1' THEN 'r2'
        ... snip long list containing various tests on a,b and c
        WHEN a='v50' THEN 'r50'
      END INTO retval;
    RETURN retVal;
END
$func$ LANGUAGE PLPGSQL IMMUTABLE;

CREATE FUNCTION bar(r text, geom geometry) RETURNS int AS
$func$
DECLARE
    retVal int;
BEGIN
    SELECT
      CASE
        WHEN r='r1' AND st_area(geom)>100 THEN 1
        WHEN r='r1' THEN 2
        ... snip long list containing various tests on r and st_area(geom)
        WHEN r='r50' THEN 25
      END INTO retval;
    RETURN retVal;
END
$func$ LANGUAGE PLPGSQL IMMUTABLE;

and a large table t (100M+ records) with columns a, b, c and geom running 
on PG 11, on spinning disks with 64GB memory and 28 cores.


When I create a simple geom index with CREATE INDEX ON t USING gist(geom) 
it finishes in about an hour, but when I create a partial index using 
these 2 functions
CREATE INDEX ON t USING gist(geom) WHERE bar(foo(a,b,c),geom)<12 it takes 
over 20 hours...


Is that because I'm using functions in the WHERE clause, or because my 
CASE lists are quite long, or both?


How long does SELECT a, b, c, foo(a, b, c) from blarg; take?
Ditto SELECT a, b, c, foo(a, b, c), bar(foo(a, b, c), geom) from blarg;
Ditto SELECT a, b, c, foo(a, b, c) from blarg where bar(foo(a, b, c), geom) < 12;

That'll narrow the problem.

Is there any way to speed up that index creation? Is upgrading to a newer
Postgres a viable option (so that JIT can speed up the functions)?


Paul


--
Angular momentum makes the world go 'round.


Re: How to return a jsonb list of lists (with integers)

2021-02-17 Thread Pavel Stehule
On Wed, 17 Feb 2021 at 16:02, Alexander Farber <
alexander.far...@gmail.com> wrote:

> I have tried switching to SELECT INTO, but IF FOUND is still always true,
> which gives me [ null, null, null ] for some users:
>
> SELECT JSONB_BUILD_ARRAY(
> SUM(CASE WHEN (player1 = in_viewer AND state1 =
> 'won')  OR (player2 = in_viewer AND state2 = 'won')  THEN 1 ELSE 0
> END)::integer,
> SUM(CASE WHEN (player1 = in_viewer AND state1 =
> 'lost') OR (player2 = in_viewer AND state2 = 'lost') THEN 1 ELSE 0
> END)::integer,
> SUM(CASE WHEN (player1 = in_viewer AND state1 =
> 'draw') OR (player2 = in_viewer AND state2 = 'draw') THEN 1 ELSE 0
> END)::integer
> )
> FROM words_games
> WHERE finished IS NOT NULL
> AND in_viewer IN (player1, player2)
> AND in_uid IN (player1, player2)
> INTO _versus;
>

It should always be true. An aggregate query always returns exactly one row,
even when no input rows match:

postgres=# SELECT sum(pocet_muzu) FROM obce WHERE false;
┌─────┐
│ sum │
╞═════╡
│   ∅ │
└─────┘
(1 row)

Regards

Pavel


> IF FOUND THEN   -- for some reason this is always
> true
> out_data := JSONB_INSERT(out_data, '{versus}',
> _versus);
> END IF;
>
> What works for me is:
>
> IF _versus <> '[null,null,null]'::jsonb THEN
> out_data := JSONB_INSERT(out_data, '{versus}',
> _versus);
> END IF;
>
> Greetings
> Alex
>
>
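One way around the always-true FOUND is to count the matched rows alongside the sums; a hedged sketch (the _games variable is illustrative, not from the thread, and the lost/draw sums are elided):

```sql
SELECT COUNT(*),
       JSONB_BUILD_ARRAY(
           SUM(CASE WHEN (player1 = in_viewer AND state1 = 'won')
                      OR (player2 = in_viewer AND state2 = 'won')
                    THEN 1 ELSE 0 END)::integer
           -- ... 'lost' and 'draw' sums as in the original query ...
       )
  INTO _games, _versus
  FROM words_games
 WHERE finished IS NOT NULL
   AND in_viewer IN (player1, player2)
   AND in_uid IN (player1, player2);

IF _games > 0 THEN
    out_data := JSONB_INSERT(out_data, '{versus}', _versus);
END IF;
```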


Re: How to return a jsonb list of lists (with integers)

2021-02-17 Thread Pavel Stehule
On Wed, 17 Feb 2021 at 15:34, Alexander Farber <
alexander.far...@gmail.com> wrote:

> Hi Pavel,
>
> why would SELECT INTO be better here?
>

At a minimum, it doesn't rely on an undocumented feature. And you can be
sure the query is really evaluated as a query.

The expressions are evaluated differently.

Regards

Pavel


> Thanks
> Alex
>
>


Logical replication stuck and no data being copied

2021-02-17 Thread anonymous001
I'm connecting from an Aurora PostgreSQL 12.4 instance to a PostgreSQL 12.6
community edition.
The PostgreSQL 12.6 server is the publisher and the Aurora DB is the subscriber.
Most of the subscriptions and publications worked,

except for one table, which I've created a separate publication for.

That table is stuck and no data is being copied.

I checked with the following query.

WITH active_pub_sub AS
(
  select relname from pg_stat_all_tables a
  join pg_subscription_rel sr on sr.srrelid = a.relid
  where srsubstate!='r'
)
select relname as table_name,
   pg_size_pretty(pg_total_relation_size(relid)) as total_size,
   pg_size_pretty(pg_relation_size(relid)) as table_size,
   pg_size_pretty(pg_total_relation_size(relid) -
pg_relation_size(relid)) as index_size
from pg_catalog.pg_statio_user_tables where relname IN (SELECT * FROM
active_pub_sub)
 order by pg_relation_size(relid) desc;
   table_name    | total_size | table_size | index_size
-----------------+------------+------------+------------
 translationitem | 16 kB      | 0 bytes    | 16 kB


This is what i see from the logs

2021-02-17 14:38:42 SAST [19462]: [3-1]
user=replication,db=prodza,app=sub_prodza_translationitem,client=13.x.x.x
LOG:  starting logical decoding for slot "sub_prodza_translationitem"
2021-02-17 14:38:42 SAST [19462]: [4-1]
user=replication,db=prodza,app=sub_prodza_translationitem,client=13.x.x.x
DETAIL:  Streaming transactions committing after 7AB/9CD78D60, reading WAL
from 7AB/9CD78D28.
2021-02-17 14:38:42 SAST [19462]: [5-1]
user=replication,db=prodza,app=sub_prodza_translationitem,client=13.x.x.x
LOG:  logical decoding found consistent point at 7AB/9CD78D28
2021-02-17 14:38:42 SAST [19462]: [6-1]
user=replication,db=prodza,app=sub_prodza_translationitem,client=13.x.x.x
DETAIL:  There are no running transactions.

from aurora
prodza=> select * from pg_subscription_rel r join pg_stat_all_tables b on
b.relid = r.srrelid and r.srsubstate='i';
-[ RECORD 1 ]-------+------------------------------
srsubid | 7768746
srrelid | 3553053
srsubstate  | i
srsublsn| 
relid   | 3553053
schemaname  | prodza
relname | translationitem
seq_scan| 6
seq_tup_read| 0
idx_scan| 0
idx_tup_fetch   | 0
n_tup_ins   | 15021981
n_tup_upd   | 0
n_tup_del   | 0
n_tup_hot_upd   | 0
n_live_tup  | 0
n_dead_tup  | 0
n_mod_since_analyze | 0
last_vacuum | 2021-02-17 14:23:43.23275+02
last_autovacuum | 2021-02-17 14:09:05.482797+02
last_analyze| 2021-02-17 14:23:43.232893+02
last_autoanalyze| 
vacuum_count| 1
autovacuum_count| 42
analyze_count   | 1
autoanalyze_count   | 0


I have tried:
ALTER SUBSCRIPTION sub_prodza_translationitem REFRESH PUBLICATION WITH
(COPY_DATA);
alter subscription sub_prodza_translationitem REFRESH PUBLICATION;

On the publisher.
# select * from pg_stat_activity where application_name
='sub_prodza_translationitem';
-[ RECORD 1 ]----+-------------------------------------------------------
datid| 221550750
datname  | prodza
pid  | 19462
usesysid | 401960256
usename  | replication
application_name | sub_prodza_translationitem
client_addr  | 13.x.x.x.x
client_hostname  | 
client_port  | 53162
backend_start| 2021-02-17 14:38:42.299105+02
xact_start   | 
query_start  | 2021-02-17 14:38:42.382807+02
state_change | 2021-02-17 14:38:42.43461+02
wait_event_type  | Client
wait_event   | WalSenderWaitForWAL
state| active
backend_xid  | 
backend_xmin | 
query| SELECT pg_catalog.set_config('search_path', '', false);
backend_type | walsender


I'm not certain why the snapshot and replication are not working. I'm
guessing the snapshot of the table completed but the WAL streaming process is
stuck.
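When a tablesync appears stuck, the replication slot state on the publisher is one place to look; a hedged diagnostic sketch (the main slot carries the subscription name, while tablesync workers create additional temporary slots):

```sql
-- Run on the publisher: is each slot active, and how far behind is it?
SELECT slot_name, active, restart_lsn, confirmed_flush_lsn,
       pg_wal_lsn_diff(pg_current_wal_lsn(), confirmed_flush_lsn) AS lag_bytes
  FROM pg_replication_slots;
```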










Re: Slow while inserting and retrieval (compared to SQL Server)

2021-02-17 Thread Thomas Kellerer
sivapostg...@yahoo.com schrieb am 17.02.2021 um 14:27:
> We use datawindows.  Datawindows will send the required DML
> statements to the database.
> And it sent in format 1 .
>
> IN start of the application, Autocommit set to True.
> Before update of any table(s)
> Autocommit is set to False
> Insert/Update/Delete records
> If success commit else rollback
> Autocommit is set to True
>
> This has been followed for decades and it's working fine with Sql
> server.
>
> Here we are trying to insert just 10 records spread across 6 tables,
> which is taking more time.. that's what we feel.   The similar work
> in SQL Server takes much less time < as if no wait is there >.
>

The rollback can't work if autocommit is set to true.
Did you mean "IN start of the application, Autocommit set to FALSE"?

However these simple statements shouldn't take substantially longer
on Postgres than on SQL Server.

If you are DELETEing from tables with foreign keys: are all FK columns indexed?
That can make a huge difference if you delete from the parent table,
but the FK columns on the children aren't indexed.

It would be really interesting to see the execution plans generated
"explain (analyze)" for the DELETE and UPDATE statements.

ATTENTION: explain (analyze) actually runs that statement so it will do the
deletes or updates. Make sure you run that in a transaction so you can rollback.

Thomas
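The safe pattern described above, sketched with placeholder table and column names:

```sql
-- Run EXPLAIN (ANALYZE) inside a transaction so the statement's effects
-- are rolled back afterwards. child_table/parent_id are placeholders.
BEGIN;
EXPLAIN (ANALYZE, BUFFERS)
    DELETE FROM child_table WHERE parent_id = 42;
ROLLBACK;  -- the delete never becomes visible

-- And the FK point: an index like this makes parent-side deletes cheap.
CREATE INDEX ON child_table (parent_id);
```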




Re: Slow index creation

2021-02-17 Thread Paul van der Linden
Well, first off it's annoying if I have to change the function and a
reindex afterwards, and secondly, lots of other queries are blocking on
that reindex query (basically everything needing a queryplan on that table).

Materializing is also an option but that too is taking its time.

As far as I know there's no parallelism used currently, and as per the
documentation, only b-tree index creation supports parallelism.
Also my postgres installation (on windows) doesn't seem to do anything with
JIT (even after setting all the jit-related values to 0)

I was more trying to get a feeling on where the slowness is, and how to
improve that...

On Tue, Feb 16, 2021 at 7:45 PM Michael Lewis  wrote:

> What is your concern with it taking 20 hours vs 1 hour? Is this index
> re-created on a regular basis?
>
> Would it make any sense to materialize the value of foo(a,b,c) as a
> generated column (PG12+ natively, or maintained by a trigger before)? Or
> even bar(foo(a,b,c),geom)?
>
> Do you know if parallel_workers are being used?
>
> JIT is available in PG11, it is just off by default. If it is available,
> turning it on and trying it seems like the simplest check if it would speed
> up the index creation.
>
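The generated-column idea from the quoted reply might look like this on PG 12+; a sketch, assuming foo and bar really are immutable as declared (generated columns require immutable expressions):

```sql
ALTER TABLE t
    ADD COLUMN bar_val int
    GENERATED ALWAYS AS (bar(foo(a, b, c), geom)) STORED;

-- The partial-index predicate then becomes a cheap column comparison:
CREATE INDEX ON t USING gist (geom) WHERE bar_val < 12;
```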


Re: Slow while inserting and retrieval (compared to SQL Server)

2021-02-17 Thread Benedict Holland
Yea. Let's see what the metrics actually are. It is possible that the data
gets loaded instantly but for whatever reason you don't see the reflected
changes.

On Wed, Feb 17, 2021, 9:09 AM sivapostg...@yahoo.com 
wrote:

> So far no performance tuning done for sql server.  It works fine for the
> load. Even the express edition which is free, works better.  I don't think
> postgres will be so slow to insert such a low number of records in an empty
> database.
>
> I'll be preparing the required sql statements to insert those records in
> pgadmin and see the timings, tomorrow.
>
>
> On Wednesday, 17 February, 2021, 07:29:29 pm IST, Benedict Holland <
> benedict.m.holl...@gmail.com> wrote:
>
>
> SQL Server is a 10k-dollar to 1-million-dollar application (or more) that
> is specifically optimized for Windows and has limited to no support
> anywhere else. Postgres is free and, from my experience, comes within 5% of
> any other dbms. Inserting 1 row at a time with auto commit on will be a bit
> slow but it shouldn't be noticeable. What times are you seeing if you do
> this with pgadmin4 compared to sql server? Also, have you done any
> performance tuning for postgres server? There are many documents detailing
> performance tuning your servers, like you probably did, at some point, with
> your sql server.
>
> Thanks,
> Ben
>
> On Wed, Feb 17, 2021, 8:28 AM sivapostg...@yahoo.com <
> sivapostg...@yahoo.com> wrote:
>
> We use datawindows.  Datawindows will send the required DML statements to
> the database.  And it sent in format 1 .
>
> IN start of the application, Autocommit set to True.
> Before update of any table(s)
> Autocommit is set to False
> Insert/Update/Delete records
> If success commit else rollback
> Autocommit is set to True
>
> This has been followed for decades and it's working fine with Sql server.
>
>
> Here we are trying to insert just 10 records spread across 6 tables, which
> is taking more time.. that's what we feel.   The similar work in SQL Server
> takes much less time < as if no wait is there >.
>
> On Wednesday, 17 February, 2021, 06:48:35 pm IST, Thomas Kellerer <
> sham...@gmx.net> wrote:
>
>
> sivapostg...@yahoo.com schrieb am 17.02.2021 um 13:01:
>
> > To populate some basic data we try to insert few records (max 4
> > records) in few tables (around 6 tables) from one window.  We feel
> > that the insert time taken is longer than the time taken while using
> > Sql Server.  We tested almost a similar window that updated the
> > similar table(s) in SQL server, which was faster.  With Postgres
> > database, we need to wait for a couple of seconds before the
> > insert/update is over, which we didn't feel in Sql Server.
>
>
>
> Are you doing single-row inserts like:
>
> insert into ... values (..);
> insert into ... values (..);
> insert into ... values (..);
> insert into ... values (..);
>
> or are you doing multi-row inserts like this:
>
> insert into ... values (..), (..), (..), (..);
>
> Typically the latter will perform much better (especially if autocommit is
> enabled)
>
>
>
>
>


Re: How to return a jsonb list of lists (with integers)

2021-02-17 Thread Alexander Farber
I have tried switching to SELECT INTO, but IF FOUND is still always true,
which gives me [ null, null, null ] for some users:

SELECT JSONB_BUILD_ARRAY(
SUM(CASE WHEN (player1 = in_viewer AND state1 =
'won')  OR (player2 = in_viewer AND state2 = 'won')  THEN 1 ELSE 0
END)::integer,
SUM(CASE WHEN (player1 = in_viewer AND state1 =
'lost') OR (player2 = in_viewer AND state2 = 'lost') THEN 1 ELSE 0
END)::integer,
SUM(CASE WHEN (player1 = in_viewer AND state1 =
'draw') OR (player2 = in_viewer AND state2 = 'draw') THEN 1 ELSE 0
END)::integer
)
FROM words_games
WHERE finished IS NOT NULL
AND in_viewer IN (player1, player2)
AND in_uid IN (player1, player2)
INTO _versus;

IF FOUND THEN   -- for some reason this is always true
out_data := JSONB_INSERT(out_data, '{versus}',
_versus);
END IF;

What works for me is:

IF _versus <> '[null,null,null]'::jsonb THEN
out_data := JSONB_INSERT(out_data, '{versus}',
_versus);
END IF;

Greetings
Alex


Re: How to return a jsonb list of lists (with integers)

2021-02-17 Thread Alexander Farber
Hi Pavel,

why would SELECT INTO be better here?

Thanks
Alex


Re: Slow while inserting and retrieval (compared to SQL Server)

2021-02-17 Thread Benedict Holland
SQL Server is a 10k-dollar to 1-million-dollar application (or more) that
is specifically optimized for Windows and has limited to no support
anywhere else. Postgres is free and, from my experience, comes within 5% of
any other dbms. Inserting 1 row at a time with auto commit on will be a bit
slow but it shouldn't be noticeable. What times are you seeing if you do
this with pgadmin4 compared to sql server? Also, have you done any
performance tuning for postgres server? There are many documents detailing
performance tuning your servers, like you probably did, at some point, with
your sql server.

Thanks,
Ben

On Wed, Feb 17, 2021, 8:28 AM sivapostg...@yahoo.com 
wrote:

> We use datawindows.  Datawindows will send the required DML statements to
> the database.  And it sent in format 1 .
>
> IN start of the application, Autocommit set to True.
> Before update of any table(s)
> Autocommit is set to False
> Insert/Update/Delete records
> If success commit else rollback
> Autocommit is set to True
>
> This has been followed for decades and it's working fine with Sql server.
>
>
> Here we are trying to insert just 10 records spread across 6 tables, which
> is taking more time.. that's what we feel.   The similar work in SQL Server
> takes much less time < as if no wait is there >.
>
> On Wednesday, 17 February, 2021, 06:48:35 pm IST, Thomas Kellerer <
> sham...@gmx.net> wrote:
>
>
> sivapostg...@yahoo.com schrieb am 17.02.2021 um 13:01:
>
> > To populate some basic data we try to insert few records (max 4
> > records) in few tables (around 6 tables) from one window.  We feel
> > that the insert time taken is longer than the time taken while using
> > Sql Server.  We tested almost a similar window that updated the
> > similar table(s) in SQL server, which was faster.  With Postgres
> > database, we need to wait for a couple of seconds before the
> > insert/update is over, which we didn't feel in Sql Server.
>
>
>
> Are you doing single-row inserts like:
>
> insert into ... values (..);
> insert into ... values (..);
> insert into ... values (..);
> insert into ... values (..);
>
> or are you doing multi-row inserts like this:
>
> insert into ... values (..), (..), (..), (..);
>
> Typically the latter will perform much better (especially if autocommit is
> enabled)
>
>
>
>
>


Re: Slow while inserting and retrieval (compared to SQL Server)

2021-02-17 Thread sivapostg...@yahoo.com
We use datawindows.  Datawindows will send the required DML statements to the
database.  And it sent in format 1.

IN start of the application, Autocommit set to True.
Before update of any table(s)
Autocommit is set to False
Insert/Update/Delete records
If success commit else rollback
Autocommit is set to True

This has been followed for decades and it's working fine with Sql server.

Here we are trying to insert just 10 records spread across 6 tables, which is
taking more time.. that's what we feel.   The similar work in SQL Server takes
much less time < as if no wait is there >.
On Wednesday, 17 February, 2021, 06:48:35 pm IST, Thomas Kellerer wrote:
 
 sivapostg...@yahoo.com schrieb am 17.02.2021 um 13:01:
> To populate some basic data we try to insert few records (max 4
> records) in few tables (around 6 tables) from one window.  We feel
> that the insert time taken is longer than the time taken while using
> Sql Server.  We tested almost a similar window that updated the
> similar table(s) in SQL server, which was faster.  With Postgres
> database, we need to wait for a couple of seconds before the
> insert/update is over, which we didn't feel in Sql Server.


Are you doing single-row inserts like:

    insert into ... values (..);
    insert into ... values (..);
    insert into ... values (..);
    insert into ... values (..);

or are you doing multi-row inserts like this:

    insert into ... values (..), (..), (..), (..);

Typically the latter will perform much better (especially if autocommit is 
enabled)



  

Re: Slow while inserting and retrieval (compared to SQL Server)

2021-02-17 Thread Thomas Kellerer
sivapostg...@yahoo.com schrieb am 17.02.2021 um 13:01:
> To populate some basic data we try to insert few records (max 4
> records) in few tables (around 6 tables) from one window.  We feel
> that the insert time taken is longer than the time taken while using
> Sql Server.   We tested almost a similar window that updated the
> similar table(s) in SQL server, which was faster.  With Postgres
> database, we need to wait for a couple of seconds before the
> insert/update is over, which we didn't feel in Sql Server.


Are you doing single-row inserts like:

insert into ... values (..);
insert into ... values (..);
insert into ... values (..);
insert into ... values (..);

or are you doing multi-row inserts like this:

insert into ... values (..), (..), (..), (..);

Typically the latter will perform much better (especially if autocommit is 
enabled)
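When the client can only send single-row statements (as a datawindow typically does), wrapping the batch in one explicit transaction avoids a commit (and fsync) per statement; a sketch with placeholder tables:

```sql
BEGIN;
INSERT INTO t1 (id, val) VALUES (1, 'a');
INSERT INTO t1 (id, val) VALUES (2, 'b');
INSERT INTO t2 (id, val) VALUES (1, 'x');
COMMIT;  -- one commit for the whole batch instead of one per INSERT
```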





Re: Slow while inserting and retrieval (compared to SQL Server)

2021-02-17 Thread luis . roberto


----- Original Message -----
> From: "sivapostgres" 
> To: "luis.roberto" 
> Cc: "pgsql-general" 
> Sent: Wednesday, 17 February 2021 9:54:18
> Subject: Re: Slow while inserting and retrieval (compared to SQL Server)

> Hello,
> Yes, that's what I feel. With no records in any tables, working from the same
> machine where PG has been installed, with only one user working, inserting few
> records (10 records in total, in all 6 tables) should not take this much time.

> I'll be inserting records from PowerBuilder applications, and how to catch the
> result of Explain. OR should I run all insert commands in PG Admin or so and
> catch those results?

> As I'm new to PG, any documentation/help in this direction will be useful.

> Happiness Always
> BKR Sivaprakash

> On Wednesday, 17 February, 2021, 05:35:43 pm IST, 
> 
> wrote:


You can run it wherever you prefer. I'm not familiar with PowerBuilder, so I'd 
say PGAdmin.
BTW, if you run the INSERTs on PGAdmin, do you still seeing slow timings?




Re: Slow while inserting and retrieval (compared to SQL Server)

2021-02-17 Thread sivapostg...@yahoo.com
Hello,

Yes, that's what I feel.  With no records in any tables, working from
the same machine where PG has been installed, with only one user working,
inserting few records (10 records in total, in all 6 tables) should not take
this much time.

I'll be inserting records from PowerBuilder applications, and how to catch the
result of Explain?  Or should I run all insert commands in PG Admin or so and
catch those results?

As I'm new to PG, any documentation/help in this direction will be useful.

Happiness Always
BKR Sivaprakash

On Wednesday, 17 February, 2021, 05:35:43 pm IST, 
 wrote:  
 
----- Original Message -----
> From: sivapostg...@yahoo.com
> To: "pgsql-general" 
> Sent: Wednesday, 17 February 2021 9:01:15
> Subject: Re: Slow while inserting and retrieval (compared to SQL Server)
> 
> Hello, 
> 
> Using Postgres 11 in Windows Server 2012 & Powerbuilder 
> Working from the same machine where Postgres 11 is installed. So no chance 
> for any network issue, I feel. 
> No setup/config change done. Just working with all the default settings. 
> With no data in any of the 179 tables in that database. 
> 
> To populate some basic data we try to insert few records (max 4 records) in 
> few tables (around 6 tables) from one window. We feel that the insert time 
> taken is longer than the time taken while using Sql Server. We tested almost 
> a similar window that updated the similar table(s) in SQL server, which was > 
> > faster. With Postgres database, we need to wait for a couple of seconds 
> before the insert/update is over, which we didn't feel in Sql Server. 
> 
> I feel that some changes in settings might improve this speed, but with not 
> much knowledge in Postgres I struggle to find out those setup values. 
> 
> Any help in improving the speed is really appreciated. 
> 
> Happiness Always 
> BKR Sivaprakash 

Can you run EXPLAIN (ANALYZE,BUFFERS) on the INSERT command and post the 
results here? Usually inserting this many records should be instantaneous.


  

Re: Slow while inserting and retrieval (compared to SQL Server)

2021-02-17 Thread luis . roberto
----- Original Message -----
> From: sivapostg...@yahoo.com
> To: "pgsql-general" 
> Sent: Wednesday, 17 February 2021 9:01:15
> Subject: Re: Slow while inserting and retrieval (compared to SQL Server)
> 
> Hello, 
> 
> Using Postgres 11 in Windows Server 2012 & Powerbuilder 
> Working from the same machine where Postgres 11 is installed. So no chance 
> for any network issue, I feel. 
> No setup/config change done. Just working with all the default settings. 
> With no data in any of the 179 tables in that database. 
> 
> To populate some basic data we try to insert few records (max 4 records) in 
> few tables (around 6 tables) from one window. We feel that the insert time 
> taken is longer than the time taken while using Sql Server. We tested almost 
> a similar window that updated the similar table(s) in SQL server, which was > 
> > faster. With Postgres database, we need to wait for a couple of seconds 
> before the insert/update is over, which we didn't feel in Sql Server. 
> 
> I feel that some changes in settings might improve this speed, but with not 
> much knowledge in Postgres I struggle to find out those setup values. 
> 
> Any help in improving the speed is really appreciated. 
> 
> Happiness Always 
> BKR Sivaprakash 

Can you run EXPLAIN (ANALYZE,BUFFERS) on the INSERT command and post the 
results here? Usually inserting this many records should be instantaneous.




Re: Slow while inserting and retrieval (compared to SQL Server)

2021-02-17 Thread sivapostg...@yahoo.com
Hello,

Using Postgres 11 in Windows Server 2012 & Powerbuilder.
Working from the same machine where Postgres 11 is installed.  So no chance
for any network issue, I feel.
No setup/config change done.  Just working with all the default settings.
With no data in any of the 179 tables in that database.

To populate some basic data we try to insert few records (max 4 records) in few
tables (around 6 tables) from one window.  We feel that the insert time taken
is longer than the time taken while using Sql Server.  We tested almost a
similar window that updated the similar table(s) in SQL server, which was
faster.  With Postgres database, we need to wait for a couple of seconds before
the insert/update is over, which we didn't feel in Sql Server.

I feel that some changes in settings might improve this speed, but with not
much knowledge in Postgres I struggle to find out those setup values.

Any help in improving the speed is really appreciated.

Happiness Always
BKR Sivaprakash


Re: How to return a jsonb list of lists (with integers)

2021-02-17 Thread Pavel Stehule
Hi

On Wed, 17 Feb 2021 at 11:40, Alexander Farber <
alexander.far...@gmail.com> wrote:

> I have tried the following, but IF FOUND is always false for some reason:
>
> _versus := JSONB_BUILD_ARRAY(
> SUM(CASE WHEN (player1 = in_uid AND state1 =
> 'won')  OR (player2 = in_uid AND state2 = 'won')  THEN 1 ELSE 0
> END)::integer,
> SUM(CASE WHEN (player1 = in_uid AND state1 =
> 'lost') OR (player2 = in_uid AND state2 = 'lost') THEN 1 ELSE 0
> END)::integer,
> SUM(CASE WHEN (player1 = in_uid AND state1 =
> 'draw') OR (player2 = in_uid AND state2 = 'draw') THEN 1 ELSE 0
> END)::integer
> )
> FROM words_games
> WHERE finished IS NOT NULL
> AND (
> (player1 = in_uid AND player2 = in_opponent) OR
> (player2 = in_uid AND player1 = in_opponent)
> );
>

Don't do this. When you want to use a query, then use SELECT INTO.

Regards

Pavel



> IF FOUND THEN
> out_data := JSONB_INSERT(out_data, '{versus}',
> _versus);
> END IF;
>
>


Re: How to return a jsonb list of lists (with integers)

2021-02-17 Thread Alexander Farber
I have ended up with the following (to avoid returning [null, null, null]
for players who never played with each other):

_versus := JSONB_BUILD_ARRAY(
SUM(CASE WHEN (player1 = in_uid AND state1 =
'won')  OR (player2 = in_uid AND state2 = 'won')  THEN 1 ELSE 0
END)::integer,
SUM(CASE WHEN (player1 = in_uid AND state1 =
'lost') OR (player2 = in_uid AND state2 = 'lost') THEN 1 ELSE 0
END)::integer,
SUM(CASE WHEN (player1 = in_uid AND state1 =
'draw') OR (player2 = in_uid AND state2 = 'draw') THEN 1 ELSE 0
END)::integer
)
FROM words_games
WHERE finished IS NOT NULL
AND (
(player1 = in_uid AND player2 = in_opponent) OR
(player2 = in_uid AND player1 = in_opponent)
);

IF _versus <> '[null, null, null]'::jsonb THEN
out_data := JSONB_INSERT(out_data, '{versus}',
_versus);
END IF;


Re: How to return a jsonb list of lists (with integers)

2021-02-17 Thread Alexander Farber
I have tried the following, but IF FOUND is always false for some reason:

_versus := JSONB_BUILD_ARRAY(
SUM(CASE WHEN (player1 = in_uid AND state1 =
'won')  OR (player2 = in_uid AND state2 = 'won')  THEN 1 ELSE 0
END)::integer,
SUM(CASE WHEN (player1 = in_uid AND state1 =
'lost') OR (player2 = in_uid AND state2 = 'lost') THEN 1 ELSE 0
END)::integer,
SUM(CASE WHEN (player1 = in_uid AND state1 =
'draw') OR (player2 = in_uid AND state2 = 'draw') THEN 1 ELSE 0
END)::integer
)
FROM words_games
WHERE finished IS NOT NULL
AND (
(player1 = in_uid AND player2 = in_opponent) OR
(player2 = in_uid AND player1 = in_opponent)
);

IF FOUND THEN
out_data := JSONB_INSERT(out_data, '{versus}',
_versus);
END IF;


Re: PostgreSQL Replication

2021-02-17 Thread Thomas Guyot
On 2021-02-17 04:22, Mutuku Ndeti wrote:
> Thank you. 
> 
> I agree with you. Single master, with a standby replica, seems easier to
> manage. Is there a way to automatically promote the standby, when the
> active master fails? Is it feasible to have 2 instances of the
> application, writing onto the same DB, reason for two instances of the
> application is to allow for redundancy/load balancing. 
> 

Again, not being PostgreSQL-specific, it is possible to have an
active/standby master node with shared storage (ex fibrechannel,
iscsi... it can only be mounted on only one node at any given time and
it's the cluster's responsibility to ensure the other node has unmounted
properly). This is ideal when dealing with fully redundant storage
arrays, and fairly simple to setup on the cluster side. Attaching
additional slaves to an active/standby cluster is also usually simpler
since there is just one server/replication log.

Another solution is to have a replication ring (e.g. A replicates to B, B
replicates to A). With such a setup it is very important that writes only
ever happen on a single node; the easiest way to ensure this is a floating
IP that can be reached on only one node at any given time (again, it is
the cluster's responsibility to ensure the floating IP is never active on
both). This method may require a more complex configuration to keep
updates consistent and to keep additional slaves synchronized to the
cluster. Also, since both servers have a permanent IP, it is important to
ensure no updates are made directly to them. The warm side can still be
used for read-only access, though (with a very small lag).
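
As a PostgreSQL-specific aside (my own addition, not part of the setup
described above): a health check driving the floating IP can ask the
local instance directly whether it is a standby:

```sql
-- pg_is_in_recovery() returns true on a streaming-replication standby
-- and false on the writable primary; a cluster manager can poll this
-- to verify the floating IP sits on the node that accepts writes.
SELECT pg_is_in_recovery();
```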

In either case, there is likely some configuration needed. This page
seems to list a few projects that may make your cluster setup much easier:

https://wiki.postgresql.org/wiki/Replication,_Clustering,_and_Connection_Pooling

Regards,

--
Thomas




Re: PostgreSQL Replication

2021-02-17 Thread Mutuku Ndeti
Thank you.

I agree with you. A single master with a standby replica seems easier to
manage. Is there a way to automatically promote the standby when the
active master fails? And is it feasible to have two instances of the
application writing onto the same DB? The reason for two instances is to
allow for redundancy/load balancing.
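
[Editorial note: automatic promotion is usually delegated to tooling such
as Patroni, repmgr or pg_auto_failover; the manual step they automate is
small. A sketch, where the data directory path is an assumption:]

```sql
-- On PostgreSQL 12 and later, a standby can be promoted from SQL:
SELECT pg_promote();
-- On older releases, the equivalent shell command is:
--   pg_ctl promote -D /var/lib/postgresql/data
```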



On Wed, Feb 17, 2021 at 11:22 AM Thomas Guyot  wrote:

> On 2021-02-16 09:28, Raul Giucich wrote:
> > This article will help you
> > https://wiki.postgresql.org/wiki/Multimaster
> > .
> >
> > El mar., 16 feb. 2021 10:56, Mutuku Ndeti  > > escribió:
> >
> > Hi,
> >
> > Need some advice here. I have an application using PostgreSQL. I
> > need to install it on 2 servers for redundancy purposes and have 2
> > databases. I need the DBs to replicate to each other, in real-time.
> > Writes can be done on both DBs.
> >
> > Please let me know if this is a feasible setup and the best way to
> > proceed.
> >
>
> Hi,
>
> While I have no experience with replication on pgsql, in general
> multi-master database replication is much more complex and often requires
> a pretty rigid setup. The graphs on that page seem to tell the same
> story for pgsql.
>
> Are you sure you really need multi-master replication as opposed to
> having a single active master in a replicated set? If properly
> configured, cluster software can automatically fail over the active
> master, which provides very good redundancy and is much simpler from a
> technological standpoint.
>
> Regards,
>
> --
> Thomas
>


-- 
www.agile.co.ke


Re: How to return a jsonb list of lists (with integers)

2021-02-17 Thread Alexander Farber
Hello, thank you for the helpful replies.

I have decided to go with PL/PgSQL for now and also switched from JSONB
list of lists to map of lists.

And the custom stored function below works mostly well, except for one
special case:

CREATE OR REPLACE FUNCTION words_stat_charts(
in_uid   integer,
in_opponent  integer, -- optional parameter, can be NULL
OUT out_data jsonb
) RETURNS jsonb AS
$func$
BEGIN
out_data := JSONB_BUILD_OBJECT();

-- add a JSON list with 7 integers
out_data := JSONB_INSERT(out_data, '{length}', JSONB_BUILD_ARRAY(
SUM(CASE WHEN LENGTH(word) = 2 THEN 1 ELSE 0 END)::integer,
SUM(CASE WHEN LENGTH(word) = 3 THEN 1 ELSE 0 END)::integer,
SUM(CASE WHEN LENGTH(word) = 4 THEN 1 ELSE 0 END)::integer,
SUM(CASE WHEN LENGTH(word) = 5 THEN 1 ELSE 0 END)::integer,
SUM(CASE WHEN LENGTH(word) = 6 THEN 1 ELSE 0 END)::integer,
SUM(CASE WHEN LENGTH(word) = 7 THEN 1 ELSE 0 END)::integer,
SUM(CASE WHEN LENGTH(word) > 7 THEN 1 ELSE 0 END)::integer
    ))
    FROM words_scores WHERE uid = in_uid;

    -- add a JSON list with 3 integers
    out_data := JSONB_INSERT(out_data, '{results}', JSONB_BUILD_ARRAY(
        SUM(CASE WHEN (player1 = in_uid AND state1 = 'won')  OR
                 (player2 = in_uid AND state2 = 'won')  THEN 1 ELSE 0 END)::integer,
        SUM(CASE WHEN (player1 = in_uid AND state1 = 'lost') OR
                 (player2 = in_uid AND state2 = 'lost') THEN 1 ELSE 0 END)::integer,
        SUM(CASE WHEN (player1 = in_uid AND state1 = 'draw') OR
                 (player2 = in_uid AND state2 = 'draw') THEN 1 ELSE 0 END)::integer
    ))
FROM words_games
WHERE finished IS NOT NULL
AND in_uid IN (player1, player2);

    -- add a JSON list with 3 integers, but only if the in_opponent param is supplied
    IF in_opponent > 0 AND in_opponent <> in_uid THEN
        out_data := JSONB_INSERT(out_data, '{versus}', JSONB_BUILD_ARRAY(
            SUM(CASE WHEN (player1 = in_uid AND state1 = 'won')  OR
                     (player2 = in_uid AND state2 = 'won')  THEN 1 ELSE 0 END)::integer,
            SUM(CASE WHEN (player1 = in_uid AND state1 = 'lost') OR
                     (player2 = in_uid AND state2 = 'lost') THEN 1 ELSE 0 END)::integer,
            SUM(CASE WHEN (player1 = in_uid AND state1 = 'draw') OR
                     (player2 = in_uid AND state2 = 'draw') THEN 1 ELSE 0 END)::integer
        ))
FROM words_games
WHERE finished IS NOT NULL
AND (
(player1 = in_uid AND player2 = in_opponent) OR
(player2 = in_uid AND player1 = in_opponent)
);
END IF;

END
$func$ LANGUAGE plpgsql;

The function works well:

# select * from words_stat_charts(5, 6);
 out_data
---
 {"length": [2726, 2825, 2341, 1363, 394, 126, 68], "versus": [6, 3, 0], "results": [298, 151, 0]}
(1 row)

Except when the 2 players have never played with each other, in which case
I get [ null, null, null ]:

# select * from words_stat_charts(5, 1);
  out_data

 {"length": [2726, 2825, 2341, 1363, 394, 126, 68], "versus": [null, null, null], "results": [298, 151, 0]}
(1 row)

Is there maybe a nice trick to completely omit "versus" from the returned
JSONB map of lists when it's [ null, null, null ]?

Thank you
Alex
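
[Editorial note: one possible trick, shown as an untested sketch. Here
`_versus` is a hypothetical jsonb variable that would be added to the
function's DECLARE section:]

```sql
    -- SELECT ... INTO so the result can be inspected before insertion
    SELECT JSONB_BUILD_ARRAY(
        -- same three SUM(CASE ... END)::integer expressions as above
    ) INTO _versus
    FROM words_games
    WHERE finished IS NOT NULL
      AND ((player1 = in_uid AND player2 = in_opponent) OR
           (player2 = in_uid AND player1 = in_opponent));

    -- an aggregate query always returns exactly one row, so FOUND is
    -- always true here; compare against the all-null array instead
    IF _versus IS DISTINCT FROM '[null, null, null]'::jsonb THEN
        out_data := JSONB_INSERT(out_data, '{versus}', _versus);
    END IF;
```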


Re: PostgreSQL Replication

2021-02-17 Thread Thomas Guyot
On 2021-02-16 09:28, Raul Giucich wrote:
> This article will help you
> https://wiki.postgresql.org/wiki/Multimaster
> . 
> 
> El mar., 16 feb. 2021 10:56, Mutuku Ndeti  > escribió:
> 
> Hi,
> 
> Need some advice here. I have an application using PostgreSQL. I
> need to install it on 2 servers for redundancy purposes and have 2
> databases. I need the DBs to replicate to each other, in real-time.
> Writes can be done on both DBs. 
> 
> Please let me know if this is a feasible setup and the best way to
> proceed. 
> 

Hi,

While I have no experience with replication on pgsql, in general
multi-master database replication is much more complex and often requires
a pretty rigid setup. The graphs on that page seem to tell the same
story for pgsql.

Are you sure you really need multi-master replication as opposed to
having a single active master in a replicated set? If properly
configured, cluster software can automatically fail over the active
master, which provides very good redundancy and is much simpler from a
technological standpoint.

Regards,

--
Thomas