mpling rows, but my data is not terribly random, so
it gets fooled?)
[And here's the remaining question in my puzzled mind: ANALYZE would not
change the reltuples value, but VACUUM FULL ANALYZE did. Er-wha?]
--
Jeff Boes Vox 269-226-9550 x24
Director of
D Kavan wrote:
> There are 66 project id's in the project table and 3 rows in the
> project_members table for global_id 2915, but it only returns 3. I
> would think it should return 66 rows, with 63 of them having a null
> pm.project_id.
>
> SELECT
> p.project_id, pm.project_id
> FRO
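The usual fix for this, sketched below with the table names from the quoted message (column details are assumed): to keep all project rows, the member filter has to live in the join condition rather than in a WHERE clause, otherwise the outer join collapses back into an inner join.

```sql
-- Sketch only: keeps all 66 projects; pm.project_id comes back NULL
-- for the 63 projects global_id 2915 does not belong to.
-- Putting "pm.global_id = 2915" in WHERE would discard those rows.
SELECT p.project_id, pm.project_id
FROM project p
LEFT JOIN project_members pm
       ON pm.project_id = p.project_id
      AND pm.global_id  = 2915;
```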
On behalf of a previous employer who, due to major downsizing, is left
without an answer to this:
What's the best way to treat impending OID wrap-around in a 7.4.8
database? This DB has been online for about 18 months, and is expected
to hit the dreaded wrap-around in about a month. At an applicat
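One crude way to watch the counter (a sketch, relying on the fact that a 7.x server assigns every new table an OID from the same global counter):

```sql
-- Sketch: creating a throwaway table consumes one OID, and its
-- pg_class entry shows roughly where the global counter stands.
CREATE TEMP TABLE oid_probe (dummy int);
SELECT oid FROM pg_class WHERE relname = 'oid_probe';
DROP TABLE oid_probe;
```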
[EMAIL PROTECTED] wrote:
> Is there a way to do a selective pg_dump of a table using a select
> where clause? As in ... dump all records where now-col_date < 2 or
> col_date='01-may-2006' etc.. ?
You can transfer the subset of data to another table, e.g.,
create table my_dump as select * from tabl
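Spelled out with hypothetical table and column names, the copy-then-dump workaround might look like:

```sql
-- Hypothetical names; a sketch of the workaround, not a recipe.
CREATE TABLE my_dump AS
  SELECT * FROM my_table
   WHERE col_date < now() - interval '2 days'
      OR col_date = '2006-05-01';
-- then, from the shell:
--   pg_dump -t my_dump mydb > subset.sql
DROP TABLE my_dump;
```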
rompt) without undoing its progress?
--
difficulty
in tracking down which process was doing what at the time.
--
Tom Lane wrote:
Jeff Boes <[EMAIL PROTECTED]> writes:
We are experiencing the following error, usually during our nightly
delete-and-vacuum cycle (when there are very few other connections to
the database):
2003-10-30 01:36:59 [25392] LOG: server process (pid 697) was
term
SIGALRM is
used, so I'll start looking there. But if you or anyone else thinks of
anything, let me know ...
[How would a plperl function that changes the local behavior of SIGALRM
affect the backend?]
--
Our largest (by total byte count) table has a nearly-equal flow of data in and
out on a daily basis (that is, we add some 40k rows during the day, and then
every night expire stuff that is timestamped beyond our cutoff, which results in
about 40k deletions).
After the deletions, the table gets vac
Ah, I should have added: we are using PG 7.3.4.
---(end of broadcast)---
TIP 6: Have you searched our list archives?
http://archives.postgresql.org
At some point in time, [EMAIL PROTECTED] (Tom Lane) wrote:
>Jeff Boes <[EMAIL PROTECTED]> writes:
>> The "Keep" and "UnUsed" numbers seem high to me, compared to other tables. Can
>> anyone interpret these and tell me anything about what we could do wi
What causes the contents of pg_listener to "linger" long past the lifespan of
the associated database backend? My table has rows that I know must be days old.
--
| Genius may have its limitations, but stupidity is not
Jeff Boes | thus handicapped.
[EMAIL
Tom Lane wrote:
Jeff Boes <[EMAIL PROTECTED]> writes:
What causes the contents of pg_listener to "linger" long past the lifespan of
the associated database backend? My table has rows that I know must be days old.
PG version?
7.3.4
In any case, if there is a row for a
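On 7.3 the stale rows can be inspected and, as a superuser, cleared by hand (a sketch; column names follow the 7.3-era pg_listener layout):

```sql
-- Sketch: list LISTEN registrations along with the backend PID that
-- owns them, to spot entries whose backend no longer exists.
SELECT relname, listenerpid FROM pg_listener;
-- After confirming at the OS level that a PID is dead, e.g.:
-- DELETE FROM pg_listener WHERE listenerpid = <dead pid>;
```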
It appears that pg_autovacuum collected the name of a temp table, and
later tried to analyze it. The table was gone by then, and this caused
the daemon to exit. As this happened on a Sunday morning, my weekend
experiment to see how pg_autovacuum would maintain our
it to
get the large object OID after it is created.
I don't know if I have permission to post or email the script, but if you
contact me offline I should know by then.
jboes at n/e/x/c/e/r/p/t/d/o/t/c/o/m
--
long). I don't see anything that will allow me to do that (other
than dropping and recreating the table).
--
file "/usr/local/pgsql/data/pg_xlog/004100EE" (log
file 65, segment 238): No such file or directory
(There are other problems that we are still working to correct, but this is the
one that I don't understand.)
--
h the same issue as having a sequence
number on a table, but if I'm interpreting all this correctly, the OID
wrap-around is going to occur a lot sooner than my table sequence number
wrap-around.)
--
We had a puzzling situation occur last weekend. Subsequently, I figured out how
to work around it for now, but of course those who sign my checks want to know
how we can rule out, once and for all, the possibility that something like that will ever
happen again ...
The OID value for large objects crossed th
We have a number of tables from which we expire (delete) rows nightly.
Is there any value to ordering the operations, in terms of either table
physical organization or elapsed time?
E.g.,
DELETE FROM foo WHERE date_expires < now();
VACUUM ANALYZE foo;
CLUSTER foo;
REINDEX TABLE foo;
How
ion of the vacuumdb command's execution?
--
Does anyone have a master list of ALTER TABLE workarounds? That is, ways
to manipulate the various pg_* tables to emulate 'missing' ALTER TABLE
commands like
ALTER TABLE DROP CONSTRAINT
ALTER TABLE DROP COLUMN
etc.
--
In article <9rc24d$170k$[EMAIL PROTECTED]>, "Jeff Boes" <[EMAIL PROTECTED]>
wrote:
> Does anyone have a master list of ALTER TABLE workarounds? That is,
> ways to manipulate the various pg_* tables to emulate 'missing' ALTER
> TABLE commands like
>
e about because of the increase in WAL count?
--
In article <9sphq7$k0h$[EMAIL PROTECTED]>, "Jeff Boes" <[EMAIL PROTECTED]>
wrote:
> and 2) very long COMMIT times for some long transactions: I'm talking
> about upwards of 10-20 MINUTES to commit after doing hundreds of
> inserts, updates and deletes in one
(<14) is not a
stable configuration. If you are running an old kernel you may wish
to upgrade soon.
--
Does anyone have a pointer to info on what kinds of things cause entries
in pg_trigger? We're trying to understand why some tables have a
half-dozen corresponding entries, and some have 30.
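A sketch for tallying those entries per table; foreign-key constraints are the usual culprit, since each one creates several referential-integrity triggers:

```sql
-- Sketch: count pg_trigger rows by table. Each foreign key adds
-- RI triggers on both tables involved, which is typically why
-- some tables show many more entries than others.
SELECT c.relname, count(*) AS triggers
  FROM pg_trigger t
  JOIN pg_class c ON c.oid = t.tgrelid
 GROUP BY c.relname
 ORDER BY count(*) DESC;
```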
--
o? Is there an incompatibility in the binaries from the two
sources of RPMs?
--
e get 130K or more of these in a day. Are
these filling up (and possibly messing up) our WALs? We have experienced
a few shared-memory corruption errors, and a full-blown corruption
(although of a different table).
--
In article <[EMAIL PROTECTED]>,
"Stephan Szabo" <[EMAIL PROTECTED]> wrote:
> How many rows are in the table? Have you run vacuum analyze?
Sorry, that information was in the original post, but perhaps you missed
it:
In article <9shhnf$23ks$[EMAIL PROTECTED]>
ive 10K of data).
--
Our database has about 70 tables. Only a half-dozen or so have more
than 100K rows, and all of these change (99% inserts, 10K-50K rows)
during an average day.
VACUUM ANALYZE takes over an hour, and it's edging up by a couple
minutes per day.
Is there a way to look at the DEBUG stats from the VA
me other areas. Even then, how often is just often enough? Is it
different in 7.1 and 7.2?
--
g log files
take less time than 7.1's approach of deleting and re-creating? How can
I determine if I have enough WAL files allocated? Should we consider
moving the pg_xlog directory to another physical disk? (At present it
points to a partition on a RAID-striped group of disks.)
--
e have an Apache
mod_perl installation running queries against these tables; could an open
read-only transaction cause problems like this?
--
On Mon, 14 Oct 2002 11:13:46 -0400, Jeff Boes wrote:
> ERROR: cannot open segment 1 of relation table_D (target block
> 2337538109): No such file or directory
>
> and for table B:
>
> NOTICE: Child itemid in update-chain marked as unused - can't continue
> repair_
table for use or at least as a starting point.
*I haven't heard yet whether I may, so don't get your hopes up. But feel
free to flood me with email so I have leverage ...
--
ELECT * FROM cities;
Otherwise you could also do:
\t
SELECT 'Now displaying cities...';
\t
SELECT * FROM cities;
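psql's \echo meta-command is a simpler sketch of the same idea, and avoids toggling tuple-only mode:

```sql
-- psql meta-command: prints the label without touching \t state.
\echo Now displaying cities...
SELECT * FROM cities;
```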
--
We have a few tables that seem to have suffered some kind of
corruption in their TOAST files. Symptoms include:
+ Runaway CPU usage during VACUUM ANALYZE (load average
spikes of 20x normal for 10-15 minutes at a time) and
more rarely during other operations
+ Recurring messages of "ERRO
On Tue, 2002-10-22 at 17:02, Tom Lane wrote:
> [EMAIL PROTECTED] (Jeff Boes) writes:
> > We have a few tables that seem to have suffered some kind of
> > corruption in their TOAST files. Symptoms include:
>
> > + Runaway CPU usage during VACUUM ANALYZE (load average
>
e PostgreSQL equivalents. Thanks
In the HTML documentation possibly installed on your system when
PostgreSQL was installed. Failing there, it's found here:
http://www.postgresql.org/idocs/index.php?catalogs.html
--
stat_fetch;
ANALYZE
(Just to make sure it's not a one-time thing)
# select relname, reltuples from pg_class
# where relname = 'stat_fetch';
  relname   | reltuples
------------+-----------
 stat_fetch |      3422
(1 row)
(I give up, I need a strong beverage of some sort ...)
--
> mailing lists but I wanted to make sure I haven't missed anything.
>
Probably too late for the OP, but I've noted here before that the current
version of DBD::Pg (Perl driver used by the DBI) doesn't work under 7.3
(the DBD::Pg code uses fields from an internal pg_* table that
resql.org/project/pgavd/projdisplay.php
I started to, but haven't had the time to really dig into it.
--
ll tables, which exist only to provide references for foreign
keys -- "code tables").
--
y performance differences whether you
VACUUM FULL or not?
--
an some info from the postmaster log? For example, I think I
know that
ReceiveSharedInvalidMessages: cache state reset
has something to do with this "idle in transaction" state. Could the
incidence of these be used as a measurement?
--
ould it be theoretically faster to do
REINDEX TABLE "foo";
or
REINDEX INDEX "ix_foo_one";
and
REINDEX INDEX "ix_foo_two";
in another session? (The system in question has two CPUs, if that makes
any difference.)
--
On Thu, 2003-06-12 at 12:42, Tom Lane wrote:
> Jeff Boes <[EMAIL PROTECTED]> writes:
> > For large (>1 million rows) tables
> > which have a pretty high turn-over (average life span of a row is 3
> > days), should there be any query performance differences whet
er uses a sequential scan; after, and it
uses the index.
And of course it stays at that point, even if another 10,000 rows get
inserted with current timestamps, until it's ANALYZEd again. So two or
three ANALYZEs per hour is not excessive, if it will keep the index
usab
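A quick sketch for judging whether another ANALYZE is due (table name hypothetical; works on the 7.x catalogs as well): compare the planner's cached estimate against the live count.

```sql
-- Sketch: if reltuples has drifted far from the real count,
-- the planner is working from stale statistics.
SELECT reltuples FROM pg_class WHERE relname = 'foo';
SELECT count(*) FROM foo;
```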
>One action we have consiously not done is "REINDEX" on the
>table. We want to avoid that as far as possible.
Why? It's usually a very painless step, unless the table has millions and
millions of rows. We reindex tables with multiple indexes and several million
rows on a weekly basis. The only dow
me to time (in our
application, probably a couple dozen times a day)?
And, absent pat answers to this, is there anything out there in PG-land
that documents partial indexes, and when to use them?
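For what it's worth, a minimal partial-index sketch (the table and boolean column are hypothetical; note that the predicate may use only immutable expressions, so now() is not allowed there):

```sql
-- Sketch: index only the small slice of rows most queries touch.
CREATE INDEX ix_foo_pending ON foo (date_expires)
 WHERE NOT processed;
-- A query must repeat a matching predicate to use the index:
SELECT * FROM foo WHERE NOT processed AND date_expires < '2004-01-01';
```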
--