Gregory Stark wrote:
> Heikki Linnakangas [EMAIL PROTECTED] writes:
>> Tom Lane wrote:
>>> Looking at the autovacuum log output,
>>> 2007-11-13 09:21:19.830 PST 9458 LOG: automatic vacuum of table specdb.public.txn_log_table: index scans: 1
>>> pages: 11 removed, 105 remain
Heikki Linnakangas [EMAIL PROTECTED] writes:
> Tom Lane wrote:
>> Looking at the autovacuum log output,
>> 2007-11-13 09:21:19.830 PST 9458 LOG: automatic vacuum of table specdb.public.txn_log_table: index scans: 1
>> pages: 11 removed, 105 remain
>> tuples: 3147 removed, 40 remain
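The counts in a log line like the one above can be pulled out mechanically. A small sketch (the helper name and parsing approach are mine, not anything shipped with PostgreSQL) that extracts the page figures from an autovacuum log entry:

```python
import re

def parse_autovacuum_pages(entry):
    """Extract the 'pages: N removed, M remain' counts from an
    autovacuum log entry (wording as in the 8.3-style output above)."""
    m = re.search(r"pages:\s*(\d+)\s+removed,\s*(\d+)\s+remain", entry)
    if m is None:
        return None
    removed, remain = int(m.group(1)), int(m.group(2))
    # total_before is what the table occupied before the pages were freed
    return {"removed": removed, "remain": remain,
            "total_before": removed + remain}

entry = ("automatic vacuum of table specdb.public.txn_log_table: "
         "index scans: 1 pages: 11 removed, 105 remain")
print(parse_autovacuum_pages(entry))
# → {'removed': 11, 'remain': 105, 'total_before': 116}
```

Note that, as is pointed out later in the thread, none of these figures tell you how many pages were actually scanned.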
Jignesh K. Shah wrote:
> Since it's really writes that I'm having trouble with... the autovacuum
> message tells me 11 pages were removed and so many tuples were
> removed... I am guessing it's writes.

Do you keep track of I/O to WAL and data separately? WAL bandwidth will
spike up when a checkpoint occurs.
Tom Lane wrote:
> Looking at the autovacuum log output,
> 2007-11-13 09:21:19.830 PST 9458 LOG: automatic vacuum of table specdb.public.txn_log_table: index scans: 1
> pages: 11 removed, 105 remain
> tuples: 3147 removed, 40 remain
> system usage: CPU 0.11s/0.09u sec elapsed 6.02 sec
Yes, I separate them out as follows:
PGDATA + 1 table which needs to be cached (also a workaround for the CLOG read problem)
LOGS
DATABASE TABLES
DATABASE INDEX
to get a good view of the I/Os going out.

I have full_page_writes=off in my settings.
I don't see spikes of increased WAL activity during checkpoints (maybe due to my full_page_writes setting).
Tom Lane wrote:
> 2007-11-13 09:21:19.830 PST 9458 LOG: automatic vacuum of table specdb.public.txn_log_table: index scans: 1
> pages: 11 removed, 105 remain
> tuples: 3147 removed, 40 remain
> system usage: CPU 0.11s/0.09u sec elapsed 6.02 sec
>
> it seems like a serious omission that this gives you no hint how many pages were scanned.
Jignesh K. Shah [EMAIL PROTECTED] writes:
> I will turn on checkpoint logging (log_checkpoints) to get more of an idea, as Heikki suggested.

Did you find out anything?

Did this happen on every checkpoint, or only some of them? The bug
Itagaki-san pointed out today in IsCheckpointOnSchedule might account
for some of them.
Alvaro Herrera [EMAIL PROTECTED] writes:
> Tom Lane wrote:
>> it seems like a serious omission that this gives you no hint how many
>> pages were scanned.
>
> Hmm, right. I'm not sure how to fix it; the simplest idea is to count
> the number of heap page accesses in lazy_scan_heap, but this wouldn't
Tom Lane wrote:
> Alvaro Herrera [EMAIL PROTECTED] writes:
>> Tom Lane wrote:
>>> it seems like a serious omission that this gives you no hint how many
>>> pages were scanned.
>
> Too complex for my taste, anyway. I would be satisfied if the log
> entries just indicated how big the table and indexes are.
Jignesh K. Shah [EMAIL PROTECTED] wrote:
> I am running tests with PG8.3b2 on Solaris 10 8/07 and I still see an I/O
> flood when a checkpoint happens.

Are there any I/O tuning knobs in Solaris? LDC (load-distributed
checkpoints) in 8.3 expects that the kernel's write-out activity is strong
enough to keep the amount of dirty pages held in the kernel small.
I was waiting to digest what I saw before sending it to the group.
I am running the EAStress workload.
I am using odata_sync, which should sync as soon as it is written.
With checkpoint_completion_target=0.9 and checkpoint_timeout=5min it seems to
be doing the right thing, judging from the logfile output.
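A back-of-the-envelope way to see what those two settings imply for write pacing (a sketch assuming the checkpoint is time-driven and the amount of dirty data is known; it ignores the xlog-segment term in the real scheduling code):

```python
def checkpoint_write_rate(dirty_mb, timeout_s=300, completion_target=0.9):
    """Rough average write rate (MB/s) a load-distributed checkpoint
    aims for: the dirty data is spread over roughly
    completion_target * checkpoint_timeout seconds."""
    window_s = timeout_s * completion_target  # 270 s for 5min / 0.9
    return dirty_mb / window_s

# Hypothetical figure: 1 GB of dirty buffers to flush per checkpoint.
print(round(checkpoint_write_rate(1024), 1))  # → 3.8 (MB/s)
```

A sustained rate near that value in iostat, rather than a burst, is what a well-spread checkpoint should look like from the OS side.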
Jignesh K. Shah [EMAIL PROTECTED] writes:
> So from the PostgreSQL view things are doing fine based on outputs: I
> need to figure out the Solaris view on it now.
> Could it be related to autovacuum happening also?

Maybe ... have you tried fiddling with the vacuum_cost_delay options?

Looking at the autovacuum log output,
2007-11-13 09:21:19.830 PST 9458 LOG: automatic vacuum of table specdb.public.txn_log_table: index scans: 1
pages: 11 removed, 105 remain
tuples: 3147 removed, 40 remain
system usage: CPU 0.11s/0.09u sec elapsed 6.02 sec
it seems like a serious omission that this gives you no hint how many pages were scanned.
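The vacuum_cost_delay knobs mentioned here form a simple budget: vacuum accumulates cost points per page touched and sleeps once the limit is hit. A minimal postgresql.conf fragment (values are illustrative, not recommendations):

```ini
# Throttle (auto)vacuum I/O: sleep for the delay once the
# cost limit's worth of points has been accumulated.
autovacuum_vacuum_cost_delay = 20ms  # sleep length per batch
autovacuum_vacuum_cost_limit = 200   # points accumulated before each sleep
vacuum_cost_page_hit = 1             # page found in shared_buffers
vacuum_cost_page_miss = 10           # page read from disk
vacuum_cost_page_dirty = 20          # page dirtied by vacuum
```

Raising the delay or lowering the limit smooths autovacuum's write load at the price of longer-running vacuums.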
I don't understand vacuum a lot... I admit I am stupid :-)
When you say "scanned", do you mean reads or writes?
Since it's really writes that I'm having trouble with... the autovacuum
message tells me 11 pages were removed and so many tuples were
removed... I am guessing it's writes.
Hello,

I am running tests with PG8.3b2 on Solaris 10 8/07 and I still see an I/O
flood when a checkpoint happens.

I have tried increasing bgwriter_lru_multiplier from the default of 2 to 5,
but I don't see any more writes by the bgwriter happening than in my
previous test which used the default.

Then I tried
Jignesh K. Shah wrote:
> I am running tests with PG8.3b2 on Solaris 10 8/07 and I still see an I/O
> flood when a checkpoint happens.
> I have tried increasing bgwriter_lru_multiplier from the default of 2 to 5,
> but I don't see any more writes by the bgwriter happening than in my
> previous test which used the default.
Jignesh K. Shah [EMAIL PROTECTED] writes:
> I am running tests with PG8.3b2 on Solaris 10 8/07 and I still see an I/O
> flood when a checkpoint happens.

I am thinking that you are probably trying to test that by issuing
manual CHECKPOINT commands. A manual checkpoint is still done at full
speed, as are shutdown checkpoints.
I am running the EAStress workload, which doesn't do manual checkpoints as
far as I know.
I will turn on checkpoint logging (log_checkpoints) to get more of an idea,
as Heikki suggested.

Thanks.
-Jignesh

Tom Lane wrote:
> Jignesh K. Shah [EMAIL PROTECTED] writes:
>> I am running tests with PG8.3b2 on Solaris 10 8/07 and
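For reference, turning that logging on is a single GUC in 8.3; a minimal postgresql.conf fragment (the interval values just mirror the settings mentioned earlier in the thread):

```ini
log_checkpoints = on                 # log start, end, and buffers written per checkpoint
checkpoint_timeout = 5min            # time-driven checkpoint interval
checkpoint_completion_target = 0.9   # spread writes over 90% of the interval
```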
On Tue, 13 Nov 2007, Jignesh K. Shah wrote:
> I have tried increasing bgwriter_lru_multiplier from the default of 2 to 5,
> but I don't see any more writes by the bgwriter happening than in my
> previous test which used the default.

The multiplier only impacts writes being done by the LRU eviction
mechanism; it has no effect on the writes a checkpoint does.
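The shape of that LRU mechanism can be sketched as follows (a simplification of the 8.3 heuristic; the real code also smooths the allocation estimate and skips buffers that are already clean, and the names here are mine):

```python
def lru_writes_this_round(recent_allocs, multiplier=2.0, maxpages=100):
    """How many dirty buffers the background writer tries to clean in
    one round: enough to cover multiplier x the recently observed
    buffer-allocation count, capped by bgwriter_lru_maxpages."""
    target = int(recent_allocs * multiplier)
    return min(target, maxpages)

# Raising the multiplier from 2 to 5 changes little once the cap or a
# low allocation rate dominates -- consistent with the observation above.
print(lru_writes_this_round(30, multiplier=2.0))  # → 60
print(lru_writes_this_round(30, multiplier=5.0))  # → 100 (capped)
```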