Although, since the info is not in the thread, it is not certain that relpages
was large enough to hit the analyze issue. Unless that was with the old default
statistics target, in which case it would be pretty easy to hit.
-dg
--
David Gould da...@sonic.net
On Tue, 13 Mar 2018 11:29:03 -0400
Tom Lane wrote:
> David Gould writes:
> > I have attached the patch we are currently using. It applies to 9.6.8. I
> > have versions for older releases in 9.4, 9.5, 9.6. It fails to apply to 10,
> > and presumably head, but I can update it.
Given all this, my vote is for back-patching too. I don't see any case where
the patched analyze is or could be worse than what we are doing. I'm happy to
provide my test cases if anyone is interested.
Thanks
-dg
--
David Gould da...@sonic.net
If simplicity worked, the world would be overrun with insects.
There is a patch in the current commit fest that allows prioritizing tables
manually. I don't like that idea much, but it does recognize that the current
scheme is not adequate for databases with large numbers of tables.
-dg
With large numbers of tables and many autovacuum workers this is a significant
extra workload.
-dg
They don't want to maintain their own fork forever, though. That said, if it
does get through beta I'd hope we could back-patch at that time.
-dg
> ...means. VACUUM seems to be thinking that
>
> reltuples = live + dead
>
> while ANALYZE apparently believes that
>
> reltuples = live
There is a patch for this one from Tomas Vondra/Tom Lane that I hope will
land in the next set of releases.
-dg
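To make the quoted disagreement concrete, here is a toy illustration (hypothetical counts, not numbers from this thread) of how far the two interpretations can diverge on a bloated table:

```python
# Toy numbers, not from this thread: a table with heavy dead-row bloat.
live_tuples = 400_000
dead_tuples = 1_600_000

vacuum_reltuples = live_tuples + dead_tuples   # VACUUM's reading: live + dead
analyze_reltuples = live_tuples                # ANALYZE's reading: live only

print(vacuum_reltuples, analyze_reltuples)     # the two differ by 5x here
```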
That is presumably correct for VACUUM, but not for ANALYZE, which is actually
hinted at in the original comments but not in the code.
-dg
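The problem can be seen in a small sketch. This is a simplified Python rendition (not the PostgreSQL source) of a moving-average update weighted by the fraction of pages scanned, with hypothetical numbers: a stale, inflated reltuples on a large table barely moves no matter what a fixed-size ANALYZE sample finds.

```python
def estimate_reltuples(old_reltuples, relpages, scanned_pages, scanned_tuples):
    """Moving-average update weighted by the fraction of pages scanned
    (a simplified sketch of the pre-patch logic, not PostgreSQL source)."""
    old_density = old_reltuples / relpages
    new_density = scanned_tuples / scanned_pages
    multiplier = scanned_pages / relpages
    updated_density = old_density + (new_density - old_density) * multiplier
    return int(updated_density * relpages + 0.5)

# Hypothetical: reltuples is stale at 1 billion, but the table really holds
# about 200 rows/page across 24.5M pages, i.e. ~4.9 billion rows.
stale = 1_000_000_000
relpages = 24_519_424
est = estimate_reltuples(stale, relpages, scanned_pages=30_000,
                         scanned_tuples=6_000_000)
# The sample's weight is 30_000 / 24_519_424 (~0.1%), so the estimate stays
# near the stale 1 billion instead of moving toward the true ~4.9 billion.
```

For VACUUM that weighting is defensible, since the unscanned pages are assumed to keep their old density; for ANALYZE's small random sample it just preserves whatever was there before.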
On Sun, 4 Mar 2018 07:49:46 -0800
Jeff Janes wrote:
> On Wed, Jan 17, 2018 at 4:49 PM, David Gould wrote:
> > # analyze verbose pg_attribute;
> > INFO: "pg_attribute": scanned 3 of 24519424 pages, containing 6475
> > live rows and 83 dead rows; 6475 rows
Given a random sample, I think we really can rely on the results of
extrapolating reltuples from analyze.
-dg
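A minimal sketch of that extrapolation (hypothetical numbers; this shows the simple density-times-pages idea, not the actual patch):

```python
def analyze_reltuples(relpages, scanned_pages, live_rows):
    """Extrapolate reltuples directly from the random sample:
    observed density times total pages, with no moving average."""
    density = live_rows / scanned_pages
    return int(density * relpages + 0.5)

# Hypothetical sample: 30,000 of 24,519,424 pages, 200 live rows per page.
est = analyze_reltuples(24_519_424, scanned_pages=30_000, live_rows=6_000_000)
# density = 200 rows/page over all pages, so est lands near the true total
# regardless of any previously stored (and possibly badly stale) reltuples.
```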
If the pages that vacuum does not scan do not change density, then the
vacuum calculation does the right thing.
However, for ANALYZE the case is different.
-dg
That gives an autovacuum threshold of
0.2 * 1959918155 = 391983631, about 3.7 times larger than the actual row count.
The submitted patch makes analyze effective in setting reltuples to within
a few percent of the count(*) value.
-dg
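The arithmetic behind that threshold, using the default settings (autovacuum_vacuum_scale_factor = 0.2, autovacuum_vacuum_threshold = 50) and the reltuples figure from the message:

```python
def autovacuum_vacuum_trigger(reltuples, scale_factor=0.2, base_threshold=50):
    """Dead tuples needed before autovacuum runs on a table:
    autovacuum_vacuum_threshold + autovacuum_vacuum_scale_factor * reltuples."""
    return base_threshold + scale_factor * reltuples

# With reltuples inflated to ~1.96 billion, roughly 392 million dead tuples
# must accumulate before autovacuum touches the table.
trigger = autovacuum_vacuum_trigger(1_959_918_155)
```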
-- scanned 3 of 1538462 pages, containing 390745 live rows
-- and 1559255 dead rows;
-- 3 rows in sample, 20038211 estimated total rows
select reltuples::int, relpages from pg_class where relname = 'big';
-- reltuples | relpages
-- 20038212  | 1538462
-dg
Allowing autovacuum to simply ignore tables forever
seems worth fixing.
-dg
Attachment: analyze_counts.awk (application/awk)
Please add the attached patch and this discussion to the open commit fest. The
original bugs thread is here: 2018011254.1408.8...@wrigleys.postgresql.org.
Bug reference: 15005
Logged by: David Gould
Email address: da...@sonic.net
PostgreSQL version: 10.1 and earlier
> Nothing built-in. You would have to use a separate external storage
> mechanism of some kind. It could be a file, separately managed shared
> memory, or perhaps something like memcached.
Or, given that you have a postgres connection, you could use a table.
-dg