Re: Internationalisation (i18n) with Postgres as backend

2021-06-01 Thread Rob Sargent

On 6/1/21 5:00 PM, Laura Smith wrote:

What is your notion of "object"?  I first assumed it was akin to
"document", but then pages have objects.

I think my terminology is a bit off.

A document/page has object(s) on it.

Or, perhaps better expressed, think of the document/page as the template and 
the object(s) as what fills the gaps in the template.
Then I take it this template (layout?) is to be re-used across disparate 
content.  So /instances/ of the template applied to content (or vice versa?) 
are documents.  Two separate domains to be managed, no?  Is this an 
authoring system?  Hard copy or digital presentation?


Re: Internationalisation (i18n) with Postgres as backend

2021-06-01 Thread Laura Smith
Hi Steve,

I didn't consider hstore, I did consider jsonb though.

The thing that made me lean towards individual rows rather than consolidated 
was that I thought versioning would ultimately be easier/cleaner to achieve 
with individual rows (e.g. using tsrange & gist exclude).  But willing to be 
proven wrong.
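For concreteness, here is a rough sketch of what that per-row versioning could look like with tsrange plus a GiST exclusion constraint (table and column names are illustrative; btree_gist is needed for the equality operators in the constraint):

```sql
-- Hypothetical sketch: one row per (page, object, language, validity period),
-- with overlapping validity ranges for the same key forbidden.
create extension if not exists btree_gist;

create table langtest_versioned (
    pageid     text not null,
    objectid   text not null,
    objectlang text not null,
    objectdata text not null,
    validity   tsrange not null default tsrange(now()::timestamp, null),
    exclude using gist (
        pageid     with =,
        objectid   with =,
        objectlang with =,
        validity   with &&
    )
);
```

The current version of a row is then the one whose range contains now, e.g. `where validity @> now()::timestamp`.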

Laura

‐‐‐ Original Message ‐‐‐
On Tuesday, 1 June 2021 22:10, Steve Baldwin  wrote:

> Hi Laura,
>
> Did you consider using hstore to store language and data as a kvp? For 
> example:
>
> b2bc_owner@b2bcreditonline=# create table langtest(pageid text, objectid 
> text, objectdata hstore, constraint langtest_pk primary key (pageid, 
> objectid));
> CREATE TABLE
> b2bc_owner@b2bcreditonline=# insert into langtest values ('zz', 'abc', 
> '"en"=>"en for abc","de"=>"de for abc"');
> INSERT 0 1
> b2bc_owner@b2bcreditonline=# insert into langtest values ('zz', 'def', 
> '"en"=>"en for def"');
> INSERT 0 1
> b2bc_owner@b2bcreditonline=# create or replace function langtestfunc(text, 
> text, text[]) returns text language sql as $$ select a.data from langtest as 
> t, unnest(t.objectdata->$3) as a(data) where t.pageid = $1 and t.objectid = 
> $2 and a.data is not null limit 1 $$;
> b2bc_owner@b2bcreditonline=# select langtestfunc('zz', 'abc', array['de', 
> 'en']);
>  langtestfunc
> --
>  de for abc
> (1 row)
> b2bc_owner@b2bcreditonline=# select langtestfunc('zz', 'def', array['de', 
> 'en']);
>  langtestfunc
> --
>  en for def
> (1 row)
>
> Just a thought.
>
> Cheers,
>
> Steve
>
> On Wed, Jun 2, 2021 at 6:09 AM Laura Smith 
>  wrote:
>
> > Hi,
> >
> > I'm creating a Postgres backend for an internal tool which is essentially a 
> > very simple implementation of a multi-lingual CMS.
> >
> > So far my thoughts are along the lines of the below, but I would appreciate 
> > a second (or more !) pair of eyes from some PostgreSQL gurus.  I am 
> > especially interested in feedback and suggestions in relation to the 
> > following questions:
> >
> > (a) Is this going to work as expected (i.e. have I missed some obvious 
> > foot-guns ?)
> >
> > (b) Is this manner of doing things reasonably efficient or are there better 
> > ways I should be thinking of ? (bear in mind the schema is not set in 
> > stone, so completely out of the box suggestions welcome !).
> >
> > The basic design concept (oversimplified) is:  For each page, you have one 
> > or more objects and those objects may have content in one or more languages.
> >
> > create table langtest(
> > pageid text not null,
> > objectid text not null,
> > objectlang text not null,
> > objectdata text not null);
> >
> > create unique index on langtest (pageid,objectid,objectlang);
> >
> > insert into langTest(pageID,objectID,objectLang,objectData) values 
> > ('zzz','abc','en','Lorem ipsum dolor sit amet');
> > insert into langTest(pageID,objectID,objectLang,objectData) values 
> > ('zzz','abc','de','Amet sit dolor ipsum lorem');
> > insert into langTest(pageID,objectID,objectLang,objectData) values 
> > ('zzz','def','en','Dolor ipsum amet sit lorem');
> >
> > select distinct on(objectid)objectid,objectlang,pageid,objectdata from 
> > langTest where pageid='zzz' and objectLang = any('{de,en}'::text[]) order 
> > by objectid,array_position('{de,en}'::text[],objectLang);
> >
> > (The idea being that the select query will be wrapped into a function which 
> > the frontend will call, passing a list of eligible languages as input)
> >
> > Thanks !
> >
> > Laura




Re: Internationalisation (i18n) with Postgres as backend

2021-06-01 Thread Laura Smith


> What is your notion of "object"?  I first assumed it was akin to
> "document", but then pages have objects.

I think my terminology is a bit off.

A document/page has object(s) on it.

Or, perhaps better expressed, think of the document/page as the template and 
the object(s) as what fills the gaps in the template.




Re: Internationalisation (i18n) with Postgres as backend

2021-06-01 Thread Steve Baldwin
Hi Laura,

Did you consider using hstore to store language and data as a kvp? For
example:

b2bc_owner@b2bcreditonline=# create table langtest(pageid text, objectid
text, objectdata hstore, constraint langtest_pk primary key (pageid,
objectid));
CREATE TABLE
b2bc_owner@b2bcreditonline=# insert into langtest values ('zz', 'abc',
'"en"=>"en for abc","de"=>"de for abc"');
INSERT 0 1
b2bc_owner@b2bcreditonline=# insert into langtest values ('zz', 'def',
'"en"=>"en for def"');
INSERT 0 1
b2bc_owner@b2bcreditonline=# create or replace function langtestfunc(text,
text, text[]) returns text language sql as $$ select a.data from langtest
as t, unnest(t.objectdata->$3) as a(data) where t.pageid = $1 and
t.objectid = $2 and a.data is not null limit 1 $$;
b2bc_owner@b2bcreditonline=# select langtestfunc('zz', 'abc', array['de',
'en']);
 langtestfunc
--
 de for abc
(1 row)
b2bc_owner@b2bcreditonline=# select langtestfunc('zz', 'def', array['de',
'en']);
 langtestfunc
--
 en for def
(1 row)

Just a thought.

Cheers,

Steve

On Wed, Jun 2, 2021 at 6:09 AM Laura Smith <
n5d9xq3ti233xiyif...@protonmail.ch> wrote:

> Hi,
>
> I'm creating a Postgres backend for an internal tool which is essentially
> a very simple implementation of a multi-lingual CMS.
>
> So far my thoughts are along the lines of the below, but I would
> appreciate a second (or more !) pair of eyes from some PostgreSQL gurus.  I
> am especially interested in feedback and suggestions in relation to the
> following questions:
>
> (a) Is this going to work as expected (i.e. have I missed some obvious
> foot-guns ?)
>
> (b) Is this manner of doing things reasonably efficient or are there
> better ways I should be thinking of ? (bear in mind the schema is not set
> in stone, so completely out of the box suggestions welcome !).
>
> The basic design concept (oversimplified) is:  For each page, you have one
> or more objects and those objects may have content in one or more languages.
>
> create table langtest(
> pageid text not null,
> objectid text not null,
> objectlang text not null,
> objectdata text not null);
>
> create unique index on langtest (pageid,objectid,objectlang);
>
> insert into langTest(pageID,objectID,objectLang,objectData) values
> ('zzz','abc','en','Lorem ipsum dolor sit amet');
> insert into langTest(pageID,objectID,objectLang,objectData) values
> ('zzz','abc','de','Amet sit dolor ipsum lorem');
> insert into langTest(pageID,objectID,objectLang,objectData) values
> ('zzz','def','en','Dolor ipsum amet sit lorem');
>
> select distinct on(objectid)objectid,objectlang,pageid,objectdata from
> langTest where pageid='zzz' and objectLang = any('{de,en}'::text[]) order
> by objectid,array_position('{de,en}'::text[],objectLang);
>
> (The idea being that the select query will be wrapped into a function
> which the frontend will call, passing a list of eligible languages as input)
>
> Thanks !
>
> Laura
>
>
>


Re: Internationalisation (i18n) with Postgres as backend

2021-06-01 Thread Rob Sargent

On 6/1/21 2:09 PM, Laura Smith wrote:

Hi,

I'm creating a Postgres backend for an internal tool which is essentially a 
very simple implementation of a multi-lingual CMS.

So far my thoughts are along the lines of the below, but I would appreciate a 
second (or more !) pair of eyes from some PostgreSQL gurus.  I am especially 
interested in feedback and suggestions in relation to the following questions:

(a) Is this going to work as expected (i.e. have I missed some obvious 
foot-guns ?)

(b) Is this manner of doing things reasonably efficient or are there better 
ways I should be thinking of ? (bear in mind the schema is not set in stone, so 
completely out of the box suggestions welcome !).

The basic design concept (oversimplified) is:  For each page, you have one or 
more objects and those objects may have content in one or more languages.

create table langtest(
pageid text not null,
objectid text not null,
objectlang text not null,
objectdata text not null);

create unique index on langtest (pageid,objectid,objectlang);

insert into langTest(pageID,objectID,objectLang,objectData) values 
('zzz','abc','en','Lorem ipsum dolor sit amet');
insert into langTest(pageID,objectID,objectLang,objectData) values 
('zzz','abc','de','Amet sit dolor ipsum lorem');
insert into langTest(pageID,objectID,objectLang,objectData) values 
('zzz','def','en','Dolor ipsum amet sit lorem');

select distinct on(objectid)objectid,objectlang,pageid,objectdata from langTest 
where pageid='zzz' and objectLang = any('{de,en}'::text[]) order by 
objectid,array_position('{de,en}'::text[],objectLang);

(The idea being that the select query will be wrapped into a function which the 
frontend will call, passing a list of eligible languages as input)

Thanks !

Laura

What is your notion of "object"?  I first assumed it was akin to 
"document", but then pages have objects.






Internationalisation (i18n) with Postgres as backend

2021-06-01 Thread Laura Smith
Hi,

I'm creating a Postgres backend for an internal tool which is essentially a 
very simple implementation of a multi-lingual CMS.

So far my thoughts are along the lines of the below, but I would appreciate a 
second (or more !) pair of eyes from some PostgreSQL gurus.  I am especially 
interested in feedback and suggestions in relation to the following questions:

(a) Is this going to work as expected (i.e. have I missed some obvious 
foot-guns ?)

(b) Is this manner of doing things reasonably efficient or are there better 
ways I should be thinking of ? (bear in mind the schema is not set in stone, so 
completely out of the box suggestions welcome !).

The basic design concept (oversimplified) is:  For each page, you have one or 
more objects and those objects may have content in one or more languages.

create table langtest(
pageid text not null,
objectid text not null,
objectlang text not null,
objectdata text not null);

create unique index on langtest (pageid,objectid,objectlang);

insert into langTest(pageID,objectID,objectLang,objectData) values 
('zzz','abc','en','Lorem ipsum dolor sit amet');
insert into langTest(pageID,objectID,objectLang,objectData) values 
('zzz','abc','de','Amet sit dolor ipsum lorem');
insert into langTest(pageID,objectID,objectLang,objectData) values 
('zzz','def','en','Dolor ipsum amet sit lorem');

select distinct on(objectid)objectid,objectlang,pageid,objectdata from langTest 
where pageid='zzz' and objectLang = any('{de,en}'::text[]) order by 
objectid,array_position('{de,en}'::text[],objectLang);

(The idea being that the select query will be wrapped into a function which the 
frontend will call, passing a list of eligible languages as input)
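A hedged sketch of that wrapper function (the name and signature are made up; it simply parameterises the query above with the caller's language preference list):

```sql
-- Hypothetical wrapper: returns each object on a page in the first
-- available language from the caller's preference list.
create or replace function get_page_objects(
    p_pageid text,
    p_langs  text[]    -- e.g. array['de','en'], in order of preference
) returns table (objectid text, objectlang text, objectdata text)
language sql stable as $$
    select distinct on (t.objectid)
           t.objectid, t.objectlang, t.objectdata
    from   langtest t
    where  t.pageid = p_pageid
      and  t.objectlang = any(p_langs)
    order  by t.objectid, array_position(p_langs, t.objectlang);
$$;
```

Called as e.g. `select * from get_page_objects('zzz', array['de','en']);`.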

Thanks !

Laura




Re: CREATE/REFRESH MATERIALIZED VIEW planner difference?

2021-06-01 Thread Vijaykumar Jain
OK, so it seems Tom ran it on pg14. :)


On Wed, 2 Jun 2021 at 00:53, Thomas Munro  wrote:

> On Wed, Jun 2, 2021 at 7:15 AM Vijaykumar Jain
>  wrote:
> > i only get workers to create mv, but refresh mv plan does not use
> workers for the same conf params.
>
> Yeah, this changed in v14:
>
>
> https://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=9e7ccd9ef64d05e87ceb1985d459bef9031205c0
>


-- 
Thanks,
Vijay
Mumbai, India


Re: CREATE/REFRESH MATERIALIZED VIEW planner difference?

2021-06-01 Thread Thomas Munro
On Wed, Jun 2, 2021 at 7:15 AM Vijaykumar Jain
 wrote:
> i only get workers to create mv, but refresh mv plan does not use workers for 
> the same conf params.

Yeah, this changed in v14:

https://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=9e7ccd9ef64d05e87ceb1985d459bef9031205c0




Re: CREATE/REFRESH MATERIALIZED VIEW planner difference?

2021-06-01 Thread Vijaykumar Jain
OK, I see this.
I may be wrong, but even when I force the parallel costs to 0,
I only get workers for CREATE MATERIALIZED VIEW; the REFRESH plan does not
use workers with the same conf params.

***
postgres=# create table if not exists t( id int primary key, value int );
CREATE TABLE
postgres=# insert into t select x,x from generate_series(1, 10) x;
INSERT 0 10
postgres=# analyze t;
ANALYZE
*

postgres=# drop materialized view mv;
DROP MATERIALIZED VIEW
postgres=# explain analyze create materialized view mv AS select
round(avg(id)), sum(id) from t, pg_sleep(10);
QUERY PLAN
---
 Aggregate  (cost=2943.02..2943.03 rows=1 width=40) (actual
time=10027.940..10027.941 rows=1 loops=1)
   ->  Nested Loop  (cost=0.00..2443.01 rows=10 width=4) (actual
time=10010.513..10022.985 rows=10 loops=1)
 ->  Function Scan on pg_sleep  (cost=0.00..0.01 rows=1 width=0)
(actual time=10010.497..10010.498 rows=1 loops=1)
 ->  Seq Scan on t  (cost=0.00..1443.00 rows=10 width=4)
(actual time=0.012..5.841 rows=10 loops=1)
 Planning Time: 0.245 ms
 Execution Time: 10039.621 ms
(6 rows)

postgres=# drop materialized view mv;
DROP MATERIALIZED VIEW
postgres=# set parallel_setup_cost=0;
SET
postgres=# set parallel_tuple_cost=0;
SET
postgres=# set min_parallel_table_scan_size=0;
SET
postgres=# set max_parallel_workers_per_gather=4;
SET
postgres=# explain analyze create materialized view mv AS select
round(avg(id)), sum(id) from t, pg_sleep(10);
  QUERY PLAN
--
 Finalize Aggregate  (cost=1318.04..1318.05 rows=1 width=40) (actual
time=10042.197..10042.457 rows=1 loops=1)
   ->  Gather  (cost=1318.00..1318.01 rows=4 width=40) (actual
time=10041.941..10042.450 rows=5 loops=1)
 Workers Planned: 4
 Workers Launched: 4
 ->  Partial Aggregate  (cost=1318.00..1318.01 rows=1 width=40)
(actual time=10035.167..10035.168 rows=1 loops=5)
   ->  Nested Loop  (cost=0.00..1193.00 rows=25000 width=4)
(actual time=10011.980..10033.456 rows=2 loops=5)
 ->  Parallel Seq Scan on t  (cost=0.00..693.00
rows=25000 width=4) (actual time=0.005..5.791 rows=2 loops=5)
 ->  Function Scan on pg_sleep  (cost=0.00..0.01 rows=1
width=0) (actual time=0.501..0.501 rows=1 loops=10)
 Planning Time: 0.105 ms
 Execution Time: 10059.992 ms
(10 rows)

postgres=# refresh materialized view mv;
REFRESH MATERIALIZED VIEW


*** auto explain in logs



2021-06-02 00:41:44.294 IST [2687] LOG:  statement: explain analyze create
materialized view mv AS select round(avg(id)), sum(id) from t, pg_sleep(10);
2021-06-02 00:41:54.361 IST [2687] LOG:  duration: 10059.566 ms  plan:
Query Text: explain analyze create materialized view mv AS select
round(avg(id)), sum(id) from t, pg_sleep(10);
Finalize Aggregate  (cost=1318.04..1318.05 rows=1 width=40) (actual
time=10042.197..10042.457 rows=1 loops=1)
  Output: round(avg(t.id), 0), sum(t.id)
  Buffers: shared hit=443
  ->  Gather  (cost=1318.00..1318.01 rows=4 width=40) (actual
time=10041.941..10042.450 rows=5 loops=1)
Output: (PARTIAL avg(t.id)), (PARTIAL sum(t.id))
Workers Planned: 4
Workers Launched: 4
Buffers: shared hit=443
->  Partial Aggregate  (cost=1318.00..1318.01 rows=1
width=40) (actual time=10035.167..10035.168 rows=1 loops=5)
  Output: PARTIAL avg(t.id), PARTIAL sum(t.id)
  Buffers: shared hit=443
  Worker 0:  actual time=10033.316..10033.316 rows=1
loops=1
Buffers: shared hit=62
  Worker 1:  actual time=10033.162..10033.163 rows=1
loops=1
Buffers: shared hit=55
  Worker 2:  actual time=10034.946..10034.946 rows=1
loops=1
Buffers: shared hit=117
  Worker 3:  actual time=10033.210..10033.211 rows=1
loops=1
Buffers: shared hit=103
  ->  Nested Loop  (cost=0.00..1193.00 rows=25000
width=4) (actual time=10011.980..10033.456 rows=2 loops=5)
Output: t.id
Buffers: shared hit=443
Worker 0:  actual time=10017.958..10032.681
rows=14012 loops=1
  Buffers: shared hit=62
Worker 1:  actual time=10014.150..10032.520
rows=12430 loops=1
  Buffers: shared hit=55

Re: CREATE/REFRESH MATERIALIZED VIEW planner difference?

2021-06-01 Thread Tom Lane
Philip Semanchuk  writes:
> I can confirm that it’s not waiting on a lock. In addition, through the AWS 
> CPU utilization monitor I can see that the REFRESH uses one CPU/worker 
> whereas the CREATE uses four. This is consistent with the EXPLAIN ANALYZE for 
> the CREATE which says it uses four workers.

Hm.  I tried to reproduce this here, and in a simple test case I get
parallelized plans for both CREATE and REFRESH.  Are you sure the
REFRESH is running with the same server parameter settings?

>> also, can you share the plans  where you see the diff.

> Unless I misunderstand, there is no plan for a REFRESH.

EXPLAIN isn't bright about that, but if you enable auto_explain,
it will log the plan for a REFRESH's query.
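A minimal session-level setup for that (the GUC names are auto_explain's real ones; the values and matview name are illustrative, and log_nested_statements is enabled in case the REFRESH's inner query counts as nested):

```sql
-- Load auto_explain and log every plan executed in this session,
-- including the query run under REFRESH MATERIALIZED VIEW.
LOAD 'auto_explain';
SET auto_explain.log_min_duration = 0;        -- log all statements' plans
SET auto_explain.log_analyze = on;            -- include actual rows/timings
SET auto_explain.log_nested_statements = on;  -- catch queries run inside utilities

REFRESH MATERIALIZED VIEW my_mat_view;  -- plan appears in the server log
```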

regards, tom lane




Re: CREATE/REFRESH MATERIALIZED VIEW planner difference?

2021-06-01 Thread Philip Semanchuk



> On Jun 1, 2021, at 2:20 PM, Vijaykumar Jain  
> wrote:
> 
> if you are not using it concurrently, can you confirm that there are *no 
> active* queries on the mv.
> refresh requires AccessExclusiveLock and will wait until it gets one.
> just asking so you can rule out that the extended time is due to waiting for 
> a lock.

I can confirm that it’s not waiting on a lock. In addition, through the AWS CPU 
utilization monitor I can see that the REFRESH uses one CPU/worker whereas the 
CREATE uses four. This is consistent with the EXPLAIN ANALYZE for the CREATE 
which says it uses four workers.


> also, can you share the plans  where you see the diff.

Unless I misunderstand, there is no plan for a REFRESH.


EXPLAIN (ANALYZE, BUFFERS) refresh materialized view my_mat_view
+---+
| QUERY PLAN|
|---|
| Utility statements have no plan structure |
+---+

Cheers
Philip




> 
> On Tue, 1 Jun 2021 at 23:30, Philip Semanchuk  
> wrote:
> Hi all,
> Should I expect a planner difference between CREATE MATERIALIZED VIEW and 
> REFRESH MATERIALIZED VIEW? We have a materialized view that uses 4 workers 
> during CREATE but only one worker during REFRESH, and as a result the refresh 
> takes much longer (~90 minutes vs. 30 minutes for the CREATE). So far this 
> behavior has been 100% consistent.
> 
> I'm running both the CREATE and REFRESH on the same server (Postgres 11.9 on 
> AWS Aurora). I don't think the refresh is using one worker in response to 
> other things happening on the server because we’ve observed this happening 
> when the server is not busy. We're not using the CONCURRENTLY option for 
> REFRESH.
> 
> Thanks
> Philip
> 
> 
> 
> -- 
> Thanks,
> Vijay
> Mumbai, India





Re: CREATE/REFRESH MATERIALIZED VIEW planner difference?

2021-06-01 Thread Vijaykumar Jain
if you are not using it concurrently, can you confirm that there are *no
active* queries on the mv.
refresh requires AccessExclusiveLock and will wait until it gets one.
just asking so you can rule out that the extended time is due to waiting for
a lock.

also, can you share the plans  where you see the diff.
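One quick way to check for that while the REFRESH appears stuck (a sketch; the matview name is assumed) is to look at the locks held or awaited on it:

```sql
-- Hypothetical check: sessions holding or waiting for locks on the matview.
-- An ungranted AccessExclusiveLock row for the REFRESH's pid would mean
-- it is blocked behind other activity, not running slowly.
SELECT l.pid, l.mode, l.granted, a.state, a.query
FROM pg_locks l
JOIN pg_class c ON c.oid = l.relation
JOIN pg_stat_activity a ON a.pid = l.pid
WHERE c.relname = 'my_mat_view';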





On Tue, 1 Jun 2021 at 23:30, Philip Semanchuk 
wrote:

> Hi all,
> Should I expect a planner difference between CREATE MATERIALIZED VIEW and
> REFRESH MATERIALIZED VIEW? We have a materialized view that uses 4 workers
> during CREATE but only one worker during REFRESH, and as a result the
> refresh takes much longer (~90 minutes vs. 30 minutes for the CREATE). So
> far this behavior has been 100% consistent.
>
> I'm running both the CREATE and REFRESH on the same server (Postgres 11.9
> on AWS Aurora). I don't think the refresh is using one worker in response
> to other things happening on the server because we’ve observed this
> happening when the server is not busy. We're not using the CONCURRENTLY
> option for REFRESH.
>
> Thanks
> Philip
>
>

-- 
Thanks,
Vijay
Mumbai, India


CREATE/REFRESH MATERIALIZED VIEW planner difference?

2021-06-01 Thread Philip Semanchuk
Hi all,
Should I expect a planner difference between CREATE MATERIALIZED VIEW and 
REFRESH MATERIALIZED VIEW? We have a materialized view that uses 4 workers 
during CREATE but only one worker during REFRESH, and as a result the refresh 
takes much longer (~90 minutes vs. 30 minutes for the CREATE). So far this 
behavior has been 100% consistent.

I'm running both the CREATE and REFRESH on the same server (Postgres 11.9 on 
AWS Aurora). I don't think the refresh is using one worker in response to other 
things happening on the server because we’ve observed this happening when the 
server is not busy. We're not using the CONCURRENTLY option for REFRESH.

Thanks
Philip



Re: Query on postgres_fdw extension

2021-06-01 Thread Swathi P
You are right, we added more than one
coordinator node for high availability and to avoid a single point of
failure.

Thanks
Swathi

On Tue, Jun 1, 2021, 3:54 PM Bharath Rupireddy <
bharath.rupireddyforpostg...@gmail.com> wrote:

> On Tue, Jun 1, 2021 at 3:31 PM Etsuro Fujita 
> wrote:
> >
> > Hi,
> >
> > On Fri, May 14, 2021 at 6:08 PM Swathi P 
> wrote:
> > > In our sharding solution, we have multiple coordinator nodes. If we
> declare the table column as a serial data type, we might end up having
> duplicate values for the id column in table_a on host_b (data node) as
> connections come from multiple coordinator nodes and might end up in
> duplicate key violations.
> > >
> > > Hence we decided to have the coordinator nodes as stateless and hence
> declared the column with no serial/sequence. Let me know if this makes
> sense.
> >
> > It seems reasonable to me to make coordinator nodes stateless, but may
> > I ask the reason you use multiple coordinator nodes?
>
> Perhaps, as a redundant node to avoid single point of failures? It's
> just a guess as I'm not the right one to answer that question though.
>
> With Regards,
> Bharath Rupireddy.
>


Re: Query on postgres_fdw extension

2021-06-01 Thread Bharath Rupireddy
On Tue, Jun 1, 2021 at 3:31 PM Etsuro Fujita  wrote:
>
> Hi,
>
> On Fri, May 14, 2021 at 6:08 PM Swathi P  wrote:
> > In our sharding solution, we have multiple coordinator nodes. If we declare 
> > the table column as a serial data type, we might end up having duplicate 
> > values for the id column in table_a on host_b (data node) as connections 
> > come from multiple coordinator nodes and might end up in duplicate key 
> > violations.
> >
> > Hence we decided to have the coordinator nodes as stateless and hence 
> > declared the column with no serial/sequence. Let me know if this makes 
> > sense.
>
> It seems reasonable to me to make coordinator nodes stateless, but may
> I ask the reason you use multiple coordinator nodes?

Perhaps, as a redundant node to avoid single point of failures? It's
just a guess as I'm not the right one to answer that question though.

With Regards,
Bharath Rupireddy.




Re: Query on postgres_fdw extension

2021-06-01 Thread Etsuro Fujita
Hi,

On Fri, May 14, 2021 at 6:08 PM Swathi P  wrote:
> In our sharding solution, we have multiple coordinator nodes. If we declare 
> the table column as a serial data type, we might end up having duplicate values 
> for the id column in table_a on host_b (data node) as connections come from 
> multiple coordinator nodes and might end up in duplicate key violations.
>
> Hence we decided to have the coordinator nodes as stateless and hence 
> declared the column with no serial/sequence. Let me know if this makes sense.

It seems reasonable to me to make coordinator nodes stateless, but may
I ask the reason you use multiple coordinator nodes?

Best regards,
Etsuro Fujita




RE: Framework for 0 downtime deploys

2021-06-01 Thread Zahir Lalani
Confidential

Thank you Nik

That’s very useful, and I will be checking out the migration test tool!

Z

From: Nikolay Samokhvalov 
Sent: 22 May 2021 13:32
To: Zahir Lalani 
Cc: pgsql-general@lists.postgresql.org
Subject: Re: Framework for 0 downtime deploys

No silver bullets exist (yet). A couple of things I can recommend:

1. GitLab's experience of changing schema without downtime and maintaining 
backward compatibility – they have open documentation and a lot of things 
solved and documented
- start here: 
https://docs.gitlab.com/ee/development/migration_style_guide.html
- check their migration_helpers.rb, plenty of useful things there

2. What my team and I are doing with respect to database migration testing: 
https://postgres.ai/. We created Database Lab Engine, an open-source tool to 
clone databases of any size in seconds and test, manually or automatically in 
CI, anything you want using "full-size" databases. It can help you catch and 
block dangerous changes leading to downtime, as well as (if you have a 
well-maintained set of tests for CI) enforce backward compatibility.
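The backward-compatibility discipline GitLab documents is essentially the expand/contract pattern: each deploy only adds things the old code can safely ignore, and removals happen a release later. A hedged SQL sketch (table and column names are made up):

```sql
-- Release N (expand): add the new column. Old application code keeps
-- working because it never references it.
alter table orders add column shipping_region text;

-- Backfill separately, ideally in batches, so no long-held locks
-- (hypothetical predicate; adapt to your data).
update orders set shipping_region = 'unknown' where shipping_region is null;

-- Release N+1 (contract): only after no deployed code reads or writes
-- the old column is it safe to drop it.
alter table orders drop column legacy_region;
```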

Nik

On Sat, May 22, 2021 at 2:12 PM Zahir Lalani 
mailto:ZahirLalani@oliver.agency>> wrote:

Confidential

Hello All

I wonder if I could garner some of the wealth of experience on this group:

Our current application deployments (every 3 weeks) require about 30 minutes of 
downtime. We are now tasked with making this zero downtime.
From all the reading I have done, we have solutions for the infrastructure and 
code deploys, but with regard to the DB the main issue seems to be keeping each 
new deploy backwards compatible (functions/tables/fields, all of it).

That seems like quite a large management task and would require careful reviews 
of changes. Is there any type of framework that already manages this type of 
capability? Or are there aspects of PG that we should be using in this regard?

Thx

Z