Hey
 At my company we run a fairly large (by our standards) website with about 3000 
concurrent users. Under normal circumstances this runs fine: there are between 
50 and 80 attachments, and web requests are served in under 100 ms on average.
 The traffic is "bursty", and at certain intervals, e.g. when class begins, we 
receive a lot of requests. These bursts result in 100-200 attachments, but no 
significant change in web request duration.
 However, every week or two we experience a slowdown where the attachment count 
rises to between 600 and 1400 and web requests take over 20 seconds on average.
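
 One thing I intend to try during the next slowdown is querying the MON$ tables 
to see what those extra attachments are actually doing. A minimal sketch 
(assuming the 2.5 monitoring tables stay responsive under that load):

 -- List currently executing statements per attachment, longest-running first.
 -- MON$STATE = 1 means the statement is running right now.
 SELECT a.MON$ATTACHMENT_ID,
        a.MON$REMOTE_ADDRESS,
        s.MON$TIMESTAMP,
        s.MON$SQL_TEXT
 FROM MON$ATTACHMENTS a
 JOIN MON$STATEMENTS s ON s.MON$ATTACHMENT_ID = a.MON$ATTACHMENT_ID
 WHERE s.MON$STATE = 1
 ORDER BY s.MON$TIMESTAMP;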

 

 I found the FileSystemCacheSize setting (Windows only) that instructs 
Firebird/Windows to only use a certain percentage of memory for the file cache, 
see CORE-3791. This made me wonder whether something similar is happening on 
our Linux server. My thought is that when these bursts happen and 10, 20, 30 GB 
of RAM is consumed by the SuperClassic process, the OS evicts pages in a 
suboptimal way, and perhaps sorting is performed on disk. Once that happens it 
becomes a negative spiral where queries get slower, and hence more attachments 
are created.
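
 To check how much of that process memory the engine itself attributes to the 
attachments, a minimal sketch against the monitoring tables (again my 
assumption that querying them is feasible during a burst):

 -- Attachment count plus memory currently used/allocated by all attachments.
 SELECT COUNT(*) AS attachments,
        SUM(m.MON$MEMORY_USED) AS memory_used,
        SUM(m.MON$MEMORY_ALLOCATED) AS memory_allocated
 FROM MON$ATTACHMENTS a
 JOIN MON$MEMORY_USAGE m ON m.MON$STAT_ID = a.MON$STAT_ID;

 Comparing that to what the OS reports for the process (RSS) might show whether 
the growth is inside the engine's own pools or coming from somewhere else.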

 

 I would be very happy to hear from others who might have experienced similar 
issues, or if you have any ideas.
 

 //Thomas Kragh 
 

 Information:
 SuperClassic 2.5.7 on CentOS
 16 cores, 128 GB RAM
 Database size 82 GB
 Page size 16K 
 

 Config (different from default):
 LockMemSize = 5048576
 LockHashSlots = 30011
 TempCacheLimit = 4294967296
 TempBlockSize = 2048576
 DefaultDbCachePages = 1024
 

 Firebird is configured to allow 64K open files. 
  
