Hi,

We are still trying to fix our issue and we found the following logs:

2013-01-17 09:55:01 CET LOG:  automatic vacuum of table
"flows.public.agg_t1213_incoming_a6_dst_port_and_proto_f5": index scans: 1
pages: 0 removed, 136547 remain
tuples: 0 removed, 4044679 remain
syst[...]
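To correlate log entries like this one with current table state, the statistics views can help; a minimal sketch, assuming the aggregation tables share the agg_t prefix seen in the log above:

```sql
-- pg_stat_user_tables shows dead-tuple counts and the last (auto)vacuum
-- and (auto)analyze times, which helps match these log lines to tables.
SELECT relname,
       n_dead_tup,
       last_autovacuum,
       last_autoanalyze
FROM pg_stat_user_tables
WHERE relname LIKE 'agg_t%'
ORDER BY n_dead_tup DESC;
```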
Hi,

> Could you show that output you base that on?

Here is the EXPLAIN on a table which was recently analyzed by the
autovacuum process:

explain delete from agg_t1343_incoming_a3_src_net_and_dst_net_f5 where
start_date < 1353317127200;
Baptiste LHOSTE wrote:
> These queries are very simple: delete from table where
> start_date < availableTimestamp. We performed an EXPLAIN to try
> to understand what could be the problem. The query planner said
> that the index on start_date could not be used because it was not
> up-to-date.

Could you show that output you base that on?
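One situation in which the planner will refuse to use an existing index is when the index is marked invalid (for instance after a failed CREATE INDEX CONCURRENTLY). A hedged sketch to list any such indexes:

```sql
-- Indexes with indisvalid = false are ignored by the planner;
-- this lists them along with their schema.
SELECT n.nspname, c.relname AS index_name
FROM pg_index i
JOIN pg_class c ON c.oid = i.indexrelid
JOIN pg_namespace n ON n.oid = c.relnamespace
WHERE NOT i.indisvalid;
```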
Hi,

> Thanks. I wasn't suggesting you increase the duration; I just
> wanted perspective on whether it could be the result of unusually
> long run times rather than blocking, and how severe that increase
> was known to be.

> Thank you very much. With that much information we should be much
> bett[...]
Baptiste LHOSTE wrote:
>> Just so we know how to interpret that, how many minutes, hours,
>> or days did you wait to see whether it would ever end?
>
> I have waited for 15 minutes in this state. I cannot wait more
> time without losing some data for our client.

Thanks. I wasn't suggesting you increase the duration; I just
wanted perspective on whether it could be the result of unusually
long run times rather than blocking, and how severe that increase
was known to be.
Baptiste LHOSTE wrote:
>> Was the blocking you described occurring at the time you
>> captured this? It doesn't seem to be showing any problem.
>
> Yes indeed. We have noticed that no process seems to be in a
> waiting state, but:
> - before the autovacuum process starts to work on both kinds of
> tables, truncate and ind[...]
Baptiste LHOSTE wrote:
> Here's the pg_stat_activity during the issue:
> [no processes waiting]
> Here's the pg_locks during the issue:
> [all locks granted]

Was the blocking you described occurring at the time you captured
this? It doesn't seem to be showing any problem.
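When taking a capture like this, pairing ungranted lock requests with the sessions holding a conflicting lock on the same relation makes any blocking easier to spot; a sketch using only the 8.4-era pg_locks catalog (8.4 predates the pg_blocking_pids() helper of later releases):

```sql
-- Each row pairs a waiting lock request with a granted lock on the
-- same relation held by another backend. IS NOT DISTINCT FROM treats
-- two NULLs as equal, which plain = would not.
SELECT blocked.pid  AS blocked_pid,
       blocker.pid  AS blocker_pid,
       blocked.locktype,
       blocked.mode AS wanted_mode,
       blocker.mode AS held_mode
FROM pg_locks blocked
JOIN pg_locks blocker
  ON blocker.granted
 AND NOT blocked.granted
 AND blocker.database IS NOT DISTINCT FROM blocked.database
 AND blocker.relation IS NOT DISTINCT FROM blocked.relation
 AND blocker.pid <> blocked.pid;
```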
>> Would it be possible to update your 8.4 installation to the latest
>> bug fix (currently 8.4.15) to rule out the influence of any bugs
>> which have already been fixed?
>
> Is there a way to upgrade without having to dump all data and restore
> them after the upgrade?

I have checked, but Debian [...]
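For what it's worth, minor releases within the same major version (8.4.x) keep the same on-disk format, so no dump/restore is needed: updating the packages and restarting the server is enough. A sketch for Debian (the package and init-script names are assumptions about this particular setup):

```shell
# Minor-version update only: the data directory is untouched.
apt-get update
apt-get install postgresql-8.4
/etc/init.d/postgresql restart
```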
> Would it be possible for you to create such a situation and capture
> the contents of pg_stat_activity and pg_locks while it is going on?
> What messages related to autovacuum or deadlocks do you see in the
> server log while this is going on?

Before the change we can only see automatic a[...]
Baptiste LHOSTE wrote:
> - finally we delete old data of the second kind of tables
>
> Then the autovacuum process starts to work on the second kind of
> tables, but our process blocks at step 3 (truncate) or step 5
> (create index).
>
> As soon as I reset the autovacuum thresholds for the seco[...]
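Resetting autovacuum thresholds per table can be done through storage parameters, which 8.4 supports; a sketch, with the table name taken from the log earlier in the thread and the values as placeholders:

```sql
-- Per-table autovacuum tuning via reloptions (available since 8.4).
ALTER TABLE agg_t1213_incoming_a6_dst_port_and_proto_f5
  SET (autovacuum_vacuum_threshold = 50,
       autovacuum_vacuum_scale_factor = 0.2);

-- Or, to keep autovacuum off this table entirely:
-- ALTER TABLE agg_t1213_incoming_a6_dst_port_and_proto_f5
--   SET (autovacuum_enabled = false);
```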
Hi everybody,

We are having issues with the autovacuum process.

Our database is composed of two kinds of tables:
- the first ones are partitions,
- the second ones are classic tables.

Every five minutes we execute the following process:
- we drop the constraint of the target partition
- we drop [...]
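From the step numbers mentioned later in the thread (truncate at step 3, create index at step 5, then deleting old data from the classic tables), the five-minute cycle might look roughly like the following; every table, index, and constraint name here is invented, and the elided steps are unknown:

```sql
-- Hypothetical sketch of the cycle described above.
ALTER TABLE agg_partition DROP CONSTRAINT agg_partition_check;  -- step 1
-- step 2: (elided in the thread)
TRUNCATE agg_partition;                                         -- step 3
-- step 4: (elided in the thread)
CREATE INDEX agg_partition_start_date_idx
    ON agg_partition (start_date);                              -- step 5
-- finally, delete old data from the second kind of tables:
DELETE FROM agg_classic WHERE start_date < 1353317127200;
```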