Re: Materialized Views - Way to refresh automatically (Incrementaly)

2023-05-11 Thread FOUTE K . Jaurès
hello Thomas,
Thanks, I'll check it out.

On Thu, 11 May 2023 at 12:01, Thomas Boussekey wrote:

> Hello
>
> On Thu, 11 May 2023 at 12:46, FOUTE K. Jaurès wrote:
>
>> Hello Everyone,
>>
>> Is there a way in PostgreSQL 14 to refresh a materialized view
>> automatically (incrementally)?
>>
>
> Have a look at the pg_ivm extension: https://github.com/sraoss/pg_ivm
>
>>
>> --
>> Jaurès FOUTE
>>
>
> Hope this helps,
> Thomas
>
>>
>>

-- 
Jaurès FOUTE


Re: huge discrepancy between EXPLAIN cost and actual time (but the table has just been ANALYZED)

2023-05-11 Thread Kirk Wolak
On Mon, May 8, 2023 at 8:29 AM Kent Tong  wrote:

> Hi,
>
> I have a complex query involving over 15 joins and a CTE query and it
> takes over 17s to complete. The output of EXPLAIN ANALYZE includes
> (somewhere deep inside):
>
> Index Scan using document_pkey on document document0_  (cost=0.29..8.31
> rows=1 width=3027) (actual time=16243.959..16243.961 rows=1 loops=1)
>
>
> This shows an index scan with a very small cost but a very large actual
> time. The strange thing is, all the tables have just been analyzed with the
> ANALYZE command (it is not a foreign table). Furthermore, if I run a simple
> query using that index, both the cost and the actual time are small.
>
> Another snippet is:
>
>
>
>   -> CTE Scan on all_related_document p  (cost=1815513.32..3030511.77
> rows=241785 width=16) (actual time=203.969..203.976 rows=0 loops=1)
>
>
> I think the cost-actual time discrepancy is fine as it is a recursive CTE
> so postgresql can't estimate the cost well. It is materialized and a full
> table scan is performed. However, the actual time is not that bad.  Also,
> the estimated rows and the actual rows are also vastly different, but I
> guess this is fine, isn't it?
>
> Any idea how I should check further?
>
> Many thanks in advance
>
> --
> Kent Tong
> IT author and consultant, child education coach
>

Kent,
  I had a really slow CTE-based query a while back.  The problem went away
when I materialized the CTE.
In our case, we had carried this query over from another big DB, and used the
CTE to force that part of it to be evaluated once.
It is known to be slow, and it is about 1% of the size of the other tables we
were operating on.

  PG, by default, optimized this and inlined it as a subquery, causing a
massive slowdown.  MATERIALIZED fixed it right up.
That would be a quick test.
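
Something along these lines (illustrative only; table and column names here
are made up) forces the CTE to be evaluated once instead of being inlined
(PostgreSQL 12+ syntax):

WITH small_lookup AS MATERIALIZED (
    SELECT id, label
    FROM   big_reference_table
    WHERE  active
)
SELECT d.*, s.label
FROM   document d
JOIN   small_lookup s ON s.id = d.ref_id;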

  Outside of that, I agree with Tom... it's really hard to help without
full details.

>>> FWIW, if I had my way, THIS would work:
EXPLAIN (ANALYZE, BUFFERS, SETTINGS, RELATIONS_DDL, SHOW_QUERY) ;

It would dump the table structures of all involved tables/views/indexes, and it
would repeat the query.
And in a perfect world, it would show you the "rewritten" query (now I am
dreaming).
All with the plan...

It will take a while to get that level of detail to come out... but the
AIs/ML will go crazy with it!

This way you just copy that output and share it.  (Eventually...)

Anyways, in the meantime, a query and the table structure/row counts would
be nice.

Kirk...


Re: "PANIC: could not open critical system index 2662" - twice

2023-05-11 Thread Kirk Wolak
On Wed, May 10, 2023 at 9:32 AM Evgeny Morozov <
postgres...@realityexists.net> wrote:

> On 10/05/2023 6:39 am, Kirk Wolak wrote:
>
> It could be as simple as creating temp tables in the other database (since
> I believe pg_class was hit).
>
> We do indeed create temp tables, both in other databases and in the ones
> being tested. (We also create non-temp tables there.)
>
>
> Also, not sure if the OP has a set of things done after he creates the DB
> that may help?
>
> Basically we read rows from the source database, create some partitions of
> tables in the target database, insert into a temp table there using BULK
> COPY, then using a regular INSERT copy from the temp tables to the new
> partitions.
>
>
> Now that the problem has been reproduced and understood by the PG
> developers, could anyone explain why PG crashed entirely with the "PANIC"
> error back in April when only specific databases were corrupted, not any
> global objects necessary for PG to run? And why did it not crash with the
> "PANIC" on this occasion?
>
I understand the question as:
Why would it PANIC on non-system data corruption, but not on system data
corruption?

To which my guess is:
Because system data corruption at startup is probably an anticipated case, and
we want to report it and come up as far as possible.
Whereas the OTHER code hit a PANIC simply because the situation was BOTH
unexpected and NOT in a place where it could move forward.
Meaning it had no idea whether it had read in bad data, or whether it had
CREATED the bad data.

As a programmer, you will find much more robust checking code at startup
than in the middle of doing something else.

But just a guess.  Someone deeper into the code might explain it better.
And you COULD go dig through the source to compare the origination of the
error messages?

Kirk...


Re: order by

2023-05-11 Thread Kirk Wolak
On Thu, May 11, 2023 at 11:30 AM Marc Millas  wrote:

> On Thu, May 11, 2023 at 5:23 PM Adrian Klaver 
> wrote:
>
>> On 5/11/23 08:00, Marc Millas wrote:
>> >
>> > On Thu, May 11, 2023 at 4:43 PM Adrian Klaver <adrian.kla...@aklaver.com>
>> > wrote:
>> >
>> > On 5/11/23 07:29, Marc Millas wrote:
>> >  > Hi,
>>
>
Please, from psql, run:
\l+  (that's a lowercase L)

on both databases.  I ran into this once because I had used the DEFAULT
collation on one machine and a SPECIFIC collation on the other.

That would explain it.

You set these things when you create the database.
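
For example, either of these shows it (the catalog query is just the non-psql
way to see the same thing):

-- in psql, on each cluster:
--   \l+
-- or query the catalog directly; datcollate/datctype are the interesting columns:
SELECT datname, datcollate, datctype
FROM   pg_database
ORDER  BY datname;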

Kirk...


Re: order by

2023-05-11 Thread Ron

On 5/11/23 09:29, Marc Millas wrote:

Hi,

I keep investigating the "death postgres" subject,
but am opening a new thread as I don't know if it's related to my problem.

I have 2 different clusters, on 2 different machines; one is prod, the
second is test.

Same data volumes.

On prod, if I do
select col_a, count(col_a) from table_a group by col_a order by col_a desc,
I get the number of NULLs on top.
To get the number of NULLs on top on the test db, I have to do
select col_a, count(col_a) from table_a group by col_a order by col_a asc.


This doesn't answer your question, but: ORDER BY has NULLS { FIRST | LAST } 
options, so no need to completely change the sort order.


And this just confuses your question:
https://www.postgresql.org/docs/15/sql-select.html

If NULLS LAST is specified, null values sort after all non-null values;
if NULLS FIRST is specified, null values sort before all non-null values.
If neither is specified, the default behavior is NULLS LAST when ASC is
specified or implied, and NULLS FIRST when DESC is specified (thus, the
default is to act as though nulls are larger than non-nulls).
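
So, with your column names, something like this (just a sketch) keeps the
descending order while pinning the NULLs where you want them:

SELECT col_a, count(col_a)
FROM   table_a
GROUP  BY col_a
ORDER  BY col_a DESC NULLS FIRST;   -- NULLs on top, the rest descending

-- or ascending order with NULLs still on top:
-- ORDER BY col_a ASC NULLS FIRST;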



--
Born in Arizona, moved to Babylonia.

Re: order by

2023-05-11 Thread Ron

On 5/11/23 09:55, Marc Millas wrote:

Thanks,

I do know about index options.

That table has NO (zero) indexes.


If the table has no indices, then why did you write "it looks like there is 
something different within the *b-tree operator* class of varchar"?  After 
all, you only care about b-trees when you have b-tree indices.


--
Born in Arizona, moved to Babylonia.

Re: Death postgres

2023-05-11 Thread Marc Millas
On Thu, May 11, 2023 at 1:56 AM Peter J. Holzer  wrote:

> On 2023-05-10 22:52:47 +0200, Marc Millas wrote:
> > On Wed, May 10, 2023 at 7:24 PM Peter J. Holzer 
> wrote:
> >
> > On 2023-05-10 16:35:04 +0200, Marc Millas wrote:
> > >  Unique  (cost=72377463163.02..201012533981.80 rows=1021522829864 width=97)
> > >    ->  Gather Merge  (cost=72377463163.02..195904919832.48 rows=1021522829864 width=97)
> > ...
> > >      ->  Parallel Hash Left Join  (cost=604502.76..1276224253.51 rows=204304565973 width=97)
> > >            Hash Cond: ((t1.col_ano)::text = (t2.col_ano)::text)
> > ...
> > >
> > > //so.. the planner guesses that those 2 joins will generate 1000 billion
> > > rows...
> >
> > Are some of the col_ano values very frequent? If say the value 42
> occurs
> > 1 million times in both table_a and table_b, the join will create 1
> > trillion rows for that value alone. That doesn't explain the crash
> or the
> > disk usage, but it would explain the crazy cost (and would probably
> be a
> > hint that this query is unlikely to finish in any reasonable time).
> >
> >
> > good guess, even if a bit surprising: there is one (and only one) "value"
> > which fits your supposition: NULL
>
> But NULL doesn't equal NULL, so that would result in only one row in the
> left join. So that's not it.
>


so, apologies...

the 75 lines in each table are not NULLs but '' (empty varchar), which
obviously is not the same thing,
and which perfectly generates 500 billion lines for the left join.
So, no planner or statistics problems; apologies for the time wasted.
Back to the initial problem:
if, with temp_file_limit set to 210 GB, I try to run the select *
from table_a left join table_b on col_a (which contains the 75 ''
on both tables),
then postgres does crash, killed by the OOM killer, after having taken 1.1 TB
of additional disk space.
The explain plan guesses 512 planned partitions. (Obviously, I cannot provide
an explain analyze...)

To my understanding, before postgres 13, hash aggregate could eat RAM without
limit in such circumstances.
But in 14.2??
(I know, 14.8 is out...)
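
For the record, a possible workaround sketch (column names as in my earlier
mails; adjust as needed): check how skewed the join key really is, and keep
the empty strings from matching each other, since NULLIF turns '' into NULL
and NULLs never match in an equality join:

-- how skewed is the join key?
SELECT col_a, count(*)
FROM   table_a
GROUP  BY col_a
ORDER  BY count(*) DESC
LIMIT  10;

-- neutralize the empty strings in the join condition:
SELECT *
FROM   table_a t1
LEFT   JOIN table_b t2
       ON NULLIF(t1.col_a, '') = NULLIF(t2.col_a, '');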


> hp
>
> --
>_  | Peter J. Holzer| Story must make more sense than reality.
> |_|_) ||
> | |   | h...@hjp.at |-- Charles Stross, "Creative writing
> __/   | http://www.hjp.at/ |   challenge!"
>



Marc MILLAS
Senior Architect
+33607850334
www.mokadb.com


Re: gather merge

2023-05-11 Thread Marc Millas
So, I set max_parallel_workers_per_gather to 0, and it works: no more
parallel execution.
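
For reference, a minimal way to do it at the session level (the same parameter
can also be set per role or in postgresql.conf):

-- disable parallel query for the current session:
SET max_parallel_workers_per_gather = 0;

-- or only for one statement, inside a transaction:
-- BEGIN;
-- SET LOCAL max_parallel_workers_per_gather = 0;
-- ... run the query ...
-- COMMIT;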

Marc MILLAS
Senior Architect
+33607850334
www.mokadb.com



On Thu, May 11, 2023 at 4:38 PM Marc Millas  wrote:

> Hi,
>
> another new thread related to "death postgres":
> how do I stop Gather Merge from going parallel?
> i.e. not by forcing parallelism down to one worker by limiting
> max_parallel_workers_per_gather.
>
> thanks,
>
>
>
> Marc MILLAS
> Senior Architect
> +33607850334
> www.mokadb.com
>
>


Re: order by

2023-05-11 Thread Adrian Klaver

On 5/11/23 08:29, Marc Millas wrote:






So how is the data getting from the third database to the prod and test
clusters?

For the machines hosting the third db, the prod and test clusters
what are?:

Should I understand that you are suggesting that the way the data is inserted
does change the behaviour of the ORDER BY clause??


What I am saying is we need context. You are there and know what you are
looking at and how it got there; we don't. At this point I don't know
anything, as I don't know the data operations involved.


So how did the data get from the third database to the others?

Context is also why the information below was requested.




OS

OS version

locale



Without solid information anything said is based on a good deal of 
assuming and we know where that leads.
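
For example, from psql this covers the database side of it (the OS details
still have to come from the shell):

-- server version:
SELECT version();

-- locale / collation / encoding of the current database:
SELECT datname, datcollate, datctype,
       pg_encoding_to_char(encoding) AS encoding
FROM   pg_database
WHERE  datname = current_database();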




 >
 >
 >     Postgres version for each cluster is?
 >     14.2

FYI, 14.8 has just been released so the clusters are behind by 6 bug
fix
releases.
Sadly.. I know.


-- 
Adrian Klaver

adrian.kla...@aklaver.com 



--
Adrian Klaver
adrian.kla...@aklaver.com





Re: order by

2023-05-11 Thread Marc Millas
On Thu, May 11, 2023 at 5:23 PM Adrian Klaver 
wrote:

> On 5/11/23 08:00, Marc Millas wrote:
> >
> > On Thu, May 11, 2023 at 4:43 PM Adrian Klaver wrote:
> >
> > On 5/11/23 07:29, Marc Millas wrote:
> >  > Hi,
> >  >
> >  > I keep on investigating on the "death postgres" subject
> >  > but open a new thread as I don't know if it's related to my pb.
> >  >
> >  > I have 2 different clusters, on 2 different machines, one is
> > prod, the
> >  > second test.
> >  > Same data volumes.
> >
> > How can they be sharing the same data 'volume'?
> >
> >  roughly: one table is 1308 lines and the second is 1310
> > lines, the data comes from yet another DB.
> >
> > those 2 tables have no indexes. they are used to build kind of
> > aggregates thru multiple left joins.
> >
> > Do you mean you are doing dump/restore between them?
> >
> > no
>
> So how is the data getting from the third database to the prod and test
> clusters?
>
> For the machines hosting the third db, the prod and test clusters what
> are?:
>

Should I understand that you are suggesting that the way the data is inserted
does change the behaviour of the ORDER BY clause??

>
> OS
>
> OS version
>
> locale
>
>
> >
> >
> > Postgres version for each cluster is?
> > 14.2
>
> FYI, 14.8 has just been released so the clusters are behind by 6 bug fix
> releases.
> Sadly.. I know.
>
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com
>
>


Re: order by

2023-05-11 Thread Adrian Klaver

On 5/11/23 08:00, Marc Millas wrote:


On Thu, May 11, 2023 at 4:43 PM Adrian Klaver wrote:


On 5/11/23 07:29, Marc Millas wrote:
 > Hi,
 >
 > I keep on investigating on the "death postgres" subject
 > but open a new thread as I don't know if it's related to my pb.
 >
 > I have 2 different clusters, on 2 different machines, one is
prod, the
 > second test.
 > Same data volumes.

How can they be sharing the same data 'volume'?

     roughly: one table is 1308 lines and the second is 1310 
lines, the data comes from yet another DB.


those 2 tables have no indexes. they are used to build kind of
aggregates thru multiple left joins.

Do you mean you are doing dump/restore between them?

no


So how is the data getting from the third database to the prod and test 
clusters?


For the machines hosting the third db, the prod and test clusters what are?:

OS

OS version

locale





Postgres version for each cluster is?
14.2


FYI, 14.8 has just been released so the clusters are behind by 6 bug fix 
releases.




--
Adrian Klaver
adrian.kla...@aklaver.com





Re: order by

2023-05-11 Thread Marc Millas
On Thu, May 11, 2023 at 4:43 PM Adrian Klaver 
wrote:

> On 5/11/23 07:29, Marc Millas wrote:
> > Hi,
> >
> > I keep on investigating on the "death postgres" subject
> > but open a new thread as I don't know if it's related to my pb.
> >
> > I have 2 different clusters, on 2 different machines, one is prod, the
> > second test.
> > Same data volumes.
>
> How can they be sharing the same data 'volume'?
>
roughly: one table is 1308 lines and the second is 1310 lines,
the data comes from yet another DB.

> those 2 tables have no indexes. they are used to build kind of aggregates
> thru multiple left joins.
>


> Do you mean you are doing dump/restore between them?
>
no

>
> Postgres version for each cluster is?
> 14.2
>



> >
> > On prod if I do
> > select col_a, count(col_a) from table_a group by col_a order by col_a
> desc,
> > I get the numbers of NULL on top.
> > To get the number of NULL on top on the test db, I have to
> > select col_a, count(col_a) from table_a group by col_a order by col_a
> asc.
> >
> > so, it looks like there is something different within the b-tree
> > operator class of varchar (?!?)
> > between those 2 clusters.
> >
> > What can I check to explain this difference as, to my understanding,
> > it's not a postgresql.conf parameter.
> >
> > thanks
> >
> > Marc MILLAS
> > Senior Architect
> > +33607850334
> > www.mokadb.com 
> >
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com
>
>


Re: order by

2023-05-11 Thread Marc Millas
Thanks,

I do know about index options.

That table has NO (zero) indexes.

Marc MILLAS
Senior Architect
+33607850334
www.mokadb.com



On Thu, May 11, 2023 at 4:48 PM Adam Scott  wrote:

> Check whether the index creation has NULLS FIRST (or LAST) on both indexes
> that are used. Use EXPLAIN to see which indexes are used.
>
> See docs for create index:
> https://www.postgresql.org/docs/current/sql-createindex.html
>
> On Thu, May 11, 2023, 7:30 AM Marc Millas  wrote:
>
>> Hi,
>>
>> I keep on investigating on the "death postgres" subject
>> but open a new thread as I don't know if it's related to my pb.
>>
>> I have 2 different clusters, on 2 different machines, one is prod, the
>> second test.
>> Same data volumes.
>>
>> On prod if I do
>> select col_a, count(col_a) from table_a group by col_a order by col_a
>> desc,
>> I get the numbers of NULL on top.
>> To get the number of NULL on top on the test db, I have to
>> select col_a, count(col_a) from table_a group by col_a order by col_a asc.
>>
>> so, it looks like there is something different within the b-tree operator
>> class of varchar (?!?)
>> between those 2 clusters.
>>
>> What can I check to explain this difference as, to my understanding,
>> it's not a postgresql.conf parameter.
>>
>> thanks
>>
>> Marc MILLAS
>> Senior Architect
>> +33607850334
>> www.mokadb.com
>>
>>


Re: order by

2023-05-11 Thread Adam Scott
Check whether the index creation has NULLS FIRST (or LAST) on both indexes that
are used. Use EXPLAIN to see which indexes are used.

See docs for create index:
https://www.postgresql.org/docs/current/sql-createindex.html
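
For example (hypothetical index name; the ordering options go on the indexed
column):

-- an index whose forward scan returns NULLs first:
CREATE INDEX table_a_col_a_desc_idx
    ON table_a (col_a DESC NULLS FIRST);

-- then check with EXPLAIN whether the ORDER BY actually uses it:
EXPLAIN
SELECT col_a, count(col_a)
FROM   table_a
GROUP  BY col_a
ORDER  BY col_a DESC;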

On Thu, May 11, 2023, 7:30 AM Marc Millas  wrote:

> Hi,
>
> I keep on investigating on the "death postgres" subject
> but open a new thread as I don't know if it's related to my pb.
>
> I have 2 different clusters, on 2 different machines, one is prod, the
> second test.
> Same data volumes.
>
> On prod if I do
> select col_a, count(col_a) from table_a group by col_a order by col_a
> desc,
> I get the numbers of NULL on top.
> To get the number of NULL on top on the test db, I have to
> select col_a, count(col_a) from table_a group by col_a order by col_a asc.
>
> so, it looks like there is something different within the b-tree operator
> class of varchar (?!?)
> between those 2 clusters.
>
> What can I check to explain this difference as, to my understanding,
> it's not a postgresql.conf parameter.
>
> thanks
>
> Marc MILLAS
> Senior Architect
> +33607850334
> www.mokadb.com
>
>


Re: order by

2023-05-11 Thread Adrian Klaver

On 5/11/23 07:29, Marc Millas wrote:

Hi,

I keep investigating the "death postgres" subject,
but am opening a new thread as I don't know if it's related to my problem.

I have 2 different clusters, on 2 different machines; one is prod, the
second is test.

Same data volumes.


How can they be sharing the same data 'volume'?

Do you mean you are doing dump/restore between them?

Postgres version for each cluster is?



On prod, if I do
select col_a, count(col_a) from table_a group by col_a order by col_a desc,
I get the number of NULLs on top.
To get the number of NULLs on top on the test db, I have to do
select col_a, count(col_a) from table_a group by col_a order by col_a asc.

so, it looks like there is something different within the b-tree
operator class of varchar (?!?)
between those 2 clusters.

What can I check to explain this difference as, to my understanding,
it's not a postgresql.conf parameter.


thanks

Marc MILLAS
Senior Architect
+33607850334
www.mokadb.com 



--
Adrian Klaver
adrian.kla...@aklaver.com





gather merge

2023-05-11 Thread Marc Millas
Hi,

another new thread related to "death postgres":
how do I stop Gather Merge from going parallel?
i.e. not by forcing parallelism down to one worker by limiting
max_parallel_workers_per_gather.

thanks,



Marc MILLAS
Senior Architect
+33607850334
www.mokadb.com


order by

2023-05-11 Thread Marc Millas
Hi,

I keep investigating the "death postgres" subject,
but am opening a new thread as I don't know if it's related to my problem.

I have 2 different clusters, on 2 different machines; one is prod, the
second is test.
Same data volumes.

On prod, if I do
select col_a, count(col_a) from table_a group by col_a order by col_a desc,
I get the number of NULLs on top.
To get the number of NULLs on top on the test db, I have to do
select col_a, count(col_a) from table_a group by col_a order by col_a asc.

so, it looks like there is something different within the b-tree operator
class of varchar (?!?)
between those 2 clusters.

What can I check to explain this difference as, to my understanding,
it's not a postgresql.conf parameter.

thanks

Marc MILLAS
Senior Architect
+33607850334
www.mokadb.com


Re: Death postgres

2023-05-11 Thread Marc Millas
On Thu, May 11, 2023 at 1:56 AM Peter J. Holzer  wrote:

> On 2023-05-10 22:52:47 +0200, Marc Millas wrote:
> > On Wed, May 10, 2023 at 7:24 PM Peter J. Holzer 
> wrote:
> >
> > On 2023-05-10 16:35:04 +0200, Marc Millas wrote:
> > >  Unique  (cost=72377463163.02..201012533981.80 rows=1021522829864 width=97)
> > >    ->  Gather Merge  (cost=72377463163.02..195904919832.48 rows=1021522829864 width=97)
> > ...
> > >      ->  Parallel Hash Left Join  (cost=604502.76..1276224253.51 rows=204304565973 width=97)
> > >            Hash Cond: ((t1.col_ano)::text = (t2.col_ano)::text)
> > ...
> > >
> > > //so.. the planner guesses that those 2 joins will generate 1000 billion
> > > rows...
> >
> > Are some of the col_ano values very frequent? If say the value 42
> occurs
> > 1 million times in both table_a and table_b, the join will create 1
> > trillion rows for that value alone. That doesn't explain the crash
> or the
> > disk usage, but it would explain the crazy cost (and would probably
> be a
> > hint that this query is unlikely to finish in any reasonable time).
> >
> >
> > good guess, even if a bit surprising: there is one (and only one) "value"
> > which fits your supposition: NULL
>
> But NULL doesn't equal NULL, so that would result in only one row in the
> left join. So that's not it.
>

if so... how ???
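
(a quick check of what you describe, just to be sure I follow:)

-- NULL = NULL is not true, it is NULL, so an equality join never matches NULL keys:
SELECT NULL::text = NULL::text                    AS null_eq_null,   -- NULL
       NULL::text IS NOT DISTINCT FROM NULL::text AS not_distinct;   -- true

-- hence each NULL-keyed row on the left side of a LEFT JOIN comes out exactly
-- once, unmatched, instead of multiplying.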

>
> hp
>
> --
>_  | Peter J. Holzer| Story must make more sense than reality.
> |_|_) ||
> | |   | h...@hjp.at |-- Charles Stross, "Creative writing
> __/   | http://www.hjp.at/ |   challenge!"
>


Re: [Beginner question]How to solve multiple definition of `yylval'?

2023-05-11 Thread Tom Lane
Wen Yi  writes:
> When I use the yacc & lex to compile,

> yacc -d 1-3.y
> lex 1-3.l
> gcc 1-3.tab.c lex.yy.c
> /usr/bin/ld: /tmp/ccYqqE5N.o:(.bss+0x28): multiple definition of `yylval'; 
> /tmp/ccdJ12gy.o:(.bss+0x4): first defined here

Bison provides the declaration of yylval; don't add one yourself.

BTW, this isn't right:

#define YYSTPYE string

first because you misspelled YYSTYPE, and second because you
evidently want yylval to be double not a string.

regards, tom lane




[Beginner question]How to solve multiple definition of `yylval'?

2023-05-11 Thread Wen Yi
Hi team,
I am studying yacc & lex to help me understand the parser of postgres,
but now I'm facing some trouble.

---
/*
  1-3.y
*/

%{
#include <stdio.h> /* printf(), puts() */
int yylex();
int yyerror(char *s);
typedef char* string;
#define YYSTPYE string
%}
%token NUMBER
%token ADD SUB MUL DIV ABS
%token NEWLINE

%%

calclist: /* Empty rule */
| calclist expression NEWLINE { printf (" = %lf", $2); }
;

expression: factor { $$ = $1; }
| expression ADD factor { $$ = $1 + $3; }
| expression SUB factor { $$ = $1 - $3; }
;

factor: term { $$ = $1; }
| factor MUL term { $$ = $1 * $3; }
| factor DIV term { $$ = $1 / $3; }
;

term: NUMBER  { $$ = $1; }
| ABS term { $$ = $2 > 0 ? $2 : - $2; }

%%

int main(int argc, char *argv[])
{
  yyparse();
}
int yyerror(char *s)
{
  puts(s);
}

-

/*
  1.3.l
Calc Program
  Jing Zhang
*/
%option noyywrap
%option noinput

%{
  #include <stdlib.h> /* atof() */
  enum yyTokenType
  {
NUMBER = 258,
ADD,
SUB,
MUL,
DIV,
ABS,
NEWLINE
  };
  double yylval;
%}

%%

"+" { return ADD; }
"-" { return SUB; }
"*" { return MUL; }
"/" { return DIV; }
"|" { return ABS; }
[0-9]+ { yylval = atof (yytext); return NUMBER;}
\n { return NEWLINE; }
[ \t] { ; }
. { ; }
%%

---


When I use the yacc & lex to compile,

yacc -d 1-3.y
lex 1-3.l
gcc 1-3.tab.c lex.yy.c
/usr/bin/ld: /tmp/ccYqqE5N.o:(.bss+0x28): multiple definition of `yylval'; 
/tmp/ccdJ12gy.o:(.bss+0x4): first defined here

Can someone provide me a solution?
Thanks in advance!

Yours,
Jing Zhang.


Re: Materialized Views - Way to refresh automatically (Incrementaly)

2023-05-11 Thread Thomas Boussekey
Hello

On Thu, 11 May 2023 at 12:46, FOUTE K. Jaurès wrote:

> Hello Everyone,
>
> Is there a way in PostgreSQL 14 to refresh a materialized view
> automatically (incrementally)?
>

Have a look at the pg_ivm extension: https://github.com/sraoss/pg_ivm
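
Roughly, usage looks like this (a minimal sketch; table and view names are
made up, and pg_ivm has to be installed on the server first):

CREATE EXTENSION IF NOT EXISTS pg_ivm;

-- create_immv() defines an incrementally maintainable materialized view:
-- pg_ivm adds triggers on the base table so the view content is kept up to
-- date as part of every INSERT/UPDATE/DELETE, with no manual REFRESH.
SELECT create_immv('open_orders_immv',
                   'SELECT order_id, customer_id, amount
                      FROM orders
                     WHERE status = ''open''');

-- a change to the base table is visible in the view immediately:
INSERT INTO orders (order_id, customer_id, amount, status)
VALUES (1001, 7, 42.50, 'open');

SELECT count(*) FROM open_orders_immv;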

>
> --
> Jaurès FOUTE
>

Hope this helps,
Thomas

>
>


Materialized Views - Way to refresh automatically (Incrementaly)

2023-05-11 Thread FOUTE K . Jaurès
Hello Everyone,

Is there a way in PostgreSQL 14 to refresh a materialized view automatically
(incrementally)?

-- 
Jaurès FOUTE