Re: Separate 100 M spatial data in 100 tables VS one big table

2024-03-06 Thread kimaidou
Hi!

I would like to thank you all for your detailed answers and explanations. I will give partitioning a try by creating a dedicated new partitioned table and inserting a (big enough) extract of the source data into it. You are right, the best approach is to try it in real life!

Best wishes,
Kimaidou
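For context, a minimal declarative-partitioning sketch in Postgres might look like the following. All names (`points`, `region_id`, `geom`) are hypothetical, and the `geometry` type assumes PostGIS is installed; the thread does not specify a schema.

```sql
-- Hypothetical example: range-partition a large spatial table by region id.
-- Table and column names are illustrative, not taken from the thread.
CREATE TABLE points (
    id        bigint GENERATED ALWAYS AS IDENTITY,
    region_id integer NOT NULL,
    geom      geometry(Point, 4326)   -- requires the PostGIS extension
) PARTITION BY RANGE (region_id);

CREATE TABLE points_r0 PARTITION OF points
    FOR VALUES FROM (0) TO (50);
CREATE TABLE points_r1 PARTITION OF points
    FOR VALUES FROM (50) TO (100);

-- Then load a representative extract of the source data and compare
-- query plans against the monolithic table, e.g.:
-- INSERT INTO points (region_id, geom)
--     SELECT region_id, geom FROM source_table LIMIT 10000000;
```

Queries filtering on the partition key (`region_id` here) can then be pruned to a single partition, which is the effect worth measuring against the single-table layout.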

Re: Optimizing count(), but Explain estimates wildly off

2024-03-06 Thread Chema
> Yours will be different, as I cannot exactly duplicate your schema or data
> distribution, but give "SELECT 1" a try. This was on Postgres 16, FWIW,
> with a default_statistics_target of 100.

SELECT 1 produces a sequential scan, like SELECT * did before VACUUM FULL. But if I force an index
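One common way to force the comparison the reply alludes to is to disable sequential scans for the session and re-run EXPLAIN. This is a sketch assuming Postgres; the table name `big_table` is hypothetical:

```sql
-- Discourage sequential scans for this session only, then compare plans.
SET enable_seqscan = off;
EXPLAIN (ANALYZE, BUFFERS) SELECT count(1) FROM big_table;
RESET enable_seqscan;

-- Refreshing planner statistics also helps when row estimates are
-- wildly off, as described earlier in the thread:
ANALYZE big_table;
```

`enable_seqscan = off` does not forbid sequential scans outright; it makes them look very expensive to the planner, so an index-only scan will be chosen if one is possible at all.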