>>> it's a disaster ... I disable the nightly gfix -sweep and launch it
>>> only when the difference between the oldest transaction and the next
>>> transaction is > 20 000, but that becomes true after only 5 hours :(
>>> so by the end of every day the difference between the oldest
>>> transaction and the next transaction grows by more than 20 000 :(
>>>
>>> I can't run gfix every day, it takes hours to finish :(
>>>
>>> What can I do?
>>
>> Find the reason why you produce such a transaction gap within 5 hours.
>> Is it just the OIT which is behind or OAT as well? What's the output of
>> gstat -h?
>>
>
> ok, I think I found it ... a few hours ago the difference
> between OIT and next transaction was 50 000, but
> now I tried gstat -h again and the difference is only 5 ...
>
> Database header page information:
>           Flags                   0
>           Checksum                12345
>           Generation              1003772708
>           Page size               8192
>           ODS version             11.2
>           Oldest transaction      1003732753
>           Oldest active           1003732754
>           Oldest snapshot         1003732754
>           Next transaction        1003732757
>           Bumped transaction      1
>           Sequence number         0
>           Next attachment ID      39942
>           Implementation ID       26
>           Shadow count            0
>           Page buffers            0
>           Next header page        0
>           Database dialect        3
>           Creation date           Oct 30, 2011 2:23:04
>           Attributes              force write, no reserve
>
>       Variable header data:
>           Sweep interval:         0
>           *END*
>
> I don't run any long transactions on the server, but it is hard
> to know whether some select or update took long to return, since a
> lot of applications are connected to the database. All applications
> use the same procedure to select or update data:
>
> procedure doSQL
> begin
>     StartTransaction;
>     try
>       select or update data;
>       CommitTransaction;
>     except
>       RollbackTransaction;
>     end;
> end;
>
> for now the only explanation I see is that we run a very large number
> of transactions (on average 225 per second; is that a lot?)

That's quite a number when running in 24x7 mode! Although, judging from
the example above, you do use explicit transaction management. So either
you still use autocommit somewhere, which results in a new transaction
for every piece of work, or you have a lot of connections and work done
by explicit transactions.
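To see why autocommit alone can inflate the transaction counter so dramatically: with autocommit every statement starts and ends its own transaction, while explicit management batches many statements into one. A minimal counter sketch (hypothetical stub classes, no real database involved):

```python
# Stub connection that only counts how many transactions get started.
# Hypothetical, for illustration -- not a real Firebird driver API.
class Connection:
    def __init__(self, autocommit=False):
        self.autocommit = autocommit
        self.transactions_started = 0
        self._in_tx = False

    def begin(self):
        self.transactions_started += 1
        self._in_tx = True

    def commit(self):
        self._in_tx = False

    def execute(self, sql):
        if self.autocommit:
            self.begin()   # implicit transaction per statement
            self.commit()
        elif not self._in_tx:
            raise RuntimeError("no active transaction")

auto = Connection(autocommit=True)
for _ in range(100):
    auto.execute("UPDATE t SET x = x + 1")

explicit = Connection()
explicit.begin()
for _ in range(100):
    explicit.execute("UPDATE t SET x = x + 1")
explicit.commit()

print(auto.transactions_started)      # 100
print(explicit.transactions_started)  # 1
```

Same hundred statements, a hundredfold difference in how fast the transaction ID counter advances.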

I really suspect that you are using autocommit as well. The least
expensive transactions are READ COMMITTED READ ONLY, so at the very
least use read-only transactions when you are selecting from the
database and not writing.
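At the API level, such a transaction is described to the server by a transaction parameter buffer (TPB). A sketch of what a READ COMMITTED READ ONLY TPB looks like, with the constant values as I recall them from ibase.h (verify against your own Firebird headers before relying on this):

```python
# TPB constants from ibase.h (values recalled from memory -- verify!).
isc_tpb_version3       = 3
isc_tpb_read           = 8   # read-only access mode
isc_tpb_read_committed = 15  # read committed isolation
isc_tpb_rec_version    = 17  # read latest committed record version
isc_tpb_nowait         = 7   # fail instead of waiting on locks

readonly_tpb = bytes([
    isc_tpb_version3,
    isc_tpb_read,
    isc_tpb_read_committed,
    isc_tpb_rec_version,
    isc_tpb_nowait,
])

print(readonly_tpb.hex())  # 03080f1107
```

Most drivers let you pass a custom TPB (or equivalent isolation/access-mode options) when starting a transaction, so the select-only paths in the application can use this instead of the default read/write concurrency transaction.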

A misunderstanding of how to work with transactions efficiently is 
probably the number one reason for server performance degradation and 
performance problems in general.

> on the server, and this is why 20 000 seems to be too little ... with
> around 19 million transactions per day
> I think I must use much more than 20 000 (1 million?)
>
> How many transactions per second can Firebird handle?

It seems to "work" in your case, but there is a hard limit of 2^31 
transactions before you urgently have to run a backup/restore cycle, 
which won't be fun with a 120GB database and more or less 24x7 requirements.

So, get used to transactions. Use them wisely and efficiently. Separate 
read-only and read/write transactions. Reaching the 1 billion mark in 
~52 days is quite a number!
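A quick back-of-the-envelope check of the numbers in this thread, from the stated average of ~225 transactions per second around the clock:

```python
# ~225 transactions/second, 24x7, against the transaction ID limits
# discussed above (2^31 IDs; only a backup/restore resets the counter).
rate_per_day = 225 * 86400            # transactions per day
days_to_1e9  = 1_000_000_000 / rate_per_day
days_to_2_31 = 2**31 / rate_per_day

print(rate_per_day)        # 19440000 per day
print(round(days_to_1e9))  # ~51 days to the first billion
print(round(days_to_2_31)) # ~110 days to exhaust the 2^31 ID space
```

That matches the ~19 million transactions per day and the roughly 52 days to the first billion mentioned above, and means the 2^31 ceiling is only a few months away at this rate.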


-- 
With regards,
Thomas Steinmaurer (^TS^)
Firebird Technology Evangelist

http://www.upscene.com/
http://www.firebirdsql.org/en/firebird-foundation/
