Re: [firebird-support] [FB 2.1] Firebird engine seems to slow down on high load without utilizing hardware

2016-04-11 Thread Thomas Steinmaurer t...@iblogmanager.com [firebird-support]
Patrick,

> recently we had some strange performance issues with our Firebird DB server.
>
> On high load, our server started to slow down. Select and update SQL
> query times went up by more than 500% on average,
>
> reaching unreasonably high execution times in the worst case (several
> minutes instead of < 1 sec).
>
>
> OIT/OAT/Next Transaction statistics stayed within 1000 the whole time
>
> We were not able to measure any hardware limiting factor. Indeed, this
> system was running with only 8 cores at about 70% CPU usage on max. load.
>
> We decided that this may be our problem since we experienced a similar
> problem at about 80% CPU load in the past.
>
> So we upgraded the hardware. As expected, the CPU-load dropped to ~35%
> usage on max. load scenario.
>
> But this did not solve the problem.
>
> Same story for the hard disk system. The usage is not even near its max
> capacity.
>
>
> We also can't see any impact on the hard disk.
>
> We're kind of stuck for ideas, because we have no idea what could be
> a potential bottleneck in the system.
>
> Since the hardware doesn't show a limit, there has to be something else
> - most likely Firebird engine related - that's limiting our system.
>
> We would be very grateful if anyone can give us hints where we can
> search further.
>
> Or someone has similar experiences to share with us.
>
>
>
> Operating System: Windows Server 2003
>
> Firebird: 2.1.5 Classic
>
> Dedicated database server (VMWare)
>
>
> CPU: 16 cores, each 2.4 GHz
>
> RAM: 32 GB
>
> About 14GB are used from OS and firebird processes under max load.
>
> HDD: SAN Storage System
>
>
> Average system IOPS under load read: 100
>
> Average system IOPS under load write: 550
>
> Backup Restore IOPS read: 1700
>
> Backup Restore IOPS write: 250
>
> SAN IOPS limit (max): 3000
>
>
> Firebird Config Settings, based on defaults
>
> DefaultDbCachePages = 1024
>
> LockMemSize = 134247728
>
> LockHashSlots = 20011
>
> Database
>
> size: about 45 GB
>
> 450 to 550 concurrent connections
>
> Daily average of 65 transactions / second (peak should be higher)
>
>
> FB_LOCK_PRINT (without any params) while system was slowing down (~4
> days uptime).
>
> I have to note that Firebird was not able to print the complete output
> (the stats were not cropped by me)
>
>
> LOCK_HEADER BLOCK
>
> Version: 16, Active owner:  0, Length: 134247728, Used: 82169316
>
> Semmask: 0x0, Flags: 0x0001
>
> Enqs: 4211018659, Converts: 10050437, Rejects: 9115488, Blocks: 105409192
>
> Deadlock scans:   1049, Deadlocks:  0, Scan interval:  10
>
> Acquires: 4723416170, Acquire blocks: 640857597, Spin count:   0
>
> Mutex wait: 13.6%
>
> Hash slots: 15077, Hash lengths (min/avg/max):3/  12/  25
>
> Remove node:  0, Insert queue: 36, Insert prior: 74815332
>
> Owners (456):forward: 131316, backward: 14899392
>
> Free owners (9):forward: 39711576, backward: 49867232
>
> Free locks (42409):forward: 65924212, backward: 23319052

fb_lock_print is reporting a hash slots value of 15077 but you show a 
setting of 20011. Mutex wait looks high to me.

Some ideas:
* Increase the hash slots value to 30011
* Get a picture of how much garbage (outdated record versions) you 
create. AFAIR it is the -r switch of gstat that gives you that 
information. Sudden spikes in statement response time could be related 
to co-operative garbage collection in Classic/SuperClassic, where the 
statement itself synchronously removes the garbage of outdated record 
versions
* Consider upgrading to 2.5. The 2.1 line (last release 2.1.7) is 
end-of-life, and 2.5 improved substantially in the area of lock 
contention in Classic/SuperClassic.
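
Just to illustrate the first two points (the database path, credentials and 
the exact slot count below are only examples, not values I'm prescribing):

LockHashSlots = 30011
(in firebird.conf)

gstat -r -user SYSDBA -password masterkey D:\data\yourdb.fdb

AFAIR the per-table "total versions" / "max versions" figures in the gstat -r 
output give you an idea which tables accumulate long back version chains.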


-- 
With regards,
Thomas Steinmaurer
http://www.upscene.com/

Professional Tools and Services for Firebird
FB TraceManager, IB LogManager, Database Health Check, Tuning etc.


Re: [firebird-support] Re: Composite index - issue or not existing feature?

2016-04-11 Thread Ann Harrison aharri...@ibphoenix.com [firebird-support]
On Mon, Apr 11, 2016 at 2:26 PM, liviusliv...@poczta.onet.pl
[firebird-support]  wrote:

> >>No, the index entry does not include transaction information.  So yes, if
> >>you have a database that includes something like an m-to-m relationship,
> >>the junction records have to be read to validate
> >>them for the current transaction.


Even if you have a record whose data is entirely contained in the index, like a
pair of primary keys pointing to other tables, Firebird has to check the record
after it identifies it in the index, because that record may not be visible to
your transaction.  It may have been inserted by a concurrent transaction,
deleted by a committed transaction, or modified either way: to the values
you're looking for by a concurrent transaction, or away from those values by a
committed transaction.

M to m
> >>relationships are typically something like: students, registrations, and
> >>courses.  Each student registers for several
> >>courses and each course has many students and the registration record
> >>consists only of a student id and a course
> >>number (it's a junction record),  Firebird has to read the registration
> >>records.  This was done  knowingly because
> >>keeping transaction information would greatly increase the size of index
> >>entries - one transaction id for the transaction
> >>that created the record version with the value sought, plus one for the
> >>transaction that changed the value.  Obviously
> >>that also increases the amount of I/O because modifying a key value
> >>modifies two index entries.  A fully mature
> >>record (only one version) could skip both transaction ids, at the cost of
> >>even more I/O.
>
> >>At one time, Firebird kept only one index entry for each value of a key
> >>for a record so if you typically modified a value
> >>between A and B, then back to A, then back to B, you'd only have two
> index
> >>entries for that record.   That optimization
> >>makes index garbage collection trickier and I think it was abandoned.  If
> >>it wasn't, obviously you'd need to have
> >>separate index entries for each instance of a value in a record version
> >>chain to keep track of transaction ids.
>
> >>It would be possible, I guess, to add a second index type that does keep
> >>transaction identifications so a designer
> >>would be able to optimize indexes for junction records.  That's easy for
> >>me to say because I don't do that kind of
> >>work anymore.
>
>
> Hi Ann and others,
>
> I read this once again and I see that not all is clear to me.
> If I modify a record without modifying indexed fields - then I suppose
> the index entry must also be modified
>

No.

because it contains the dbKey.


Ah!  The DBKey for a given record is constant.  It has three parts.  The
first two identify a page in the database, going through a table of pages
of page pointers, then a pointer page.  The third part of the DBKey
identifies an entry in the page index - an array of offset/length pairs
stored on the page that indicate the start and size of records on the page.

When a record is stored it gets a DBKey.  That's the DBKey for the
primary version of the record.  As records are modified, the primary
record version is always the most recent and points backward to
older versions.

When the record is modified, Firebird checks for space on the same page.
If there's enough space, Firebird stores the new version on the page,
creating a new DBKey for the old version, which is stored as a back pointer
in the new record version.  It sets the original DBKey to point to the new
version.  If there isn't space on the original page for both record versions
there's a more complicated dance, but the end result is that the new version
has the old DBKey and points backward to the old version.

Index entries for the old and new versions of a record all contain the DBKey
of the primary (newest) record version.  When doing an indexed lookup,
Firebird walks backward through the record versions until it finds one that
the current transaction can see, then checks whether the key values in that
record version match the lookup values.


> And if I update a record then we get a new record
> version with a new dbKey (once per transaction).
>

Nope.


> If this is true, then I see that the only benefit of the current index is the
> size of the whole index, because it does not contain transaction ids,
> but the I/O cost is not reduced.
>

If all versions of a record have the same values for all key fields, the
indexes are not touched during modifications.
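
To make that concrete with the registration example (the table and the NOTE
column are just made up for illustration; RDB$DB_KEY is the real pseudo-column):

SELECT RDB$DB_KEY, STUDENT_ID, COURSE_NO
FROM REGISTRATION WHERE STUDENT_ID = 42 AND COURSE_NO = 101;

/* update a column that is not part of any index ... */
UPDATE REGISTRATION SET NOTE = 'waitlisted'
WHERE STUDENT_ID = 42 AND COURSE_NO = 101;

/* ... the same DBKey comes back, because the new version keeps the primary
   record version's DBKey, and no index entry was touched */
SELECT RDB$DB_KEY, STUDENT_ID, COURSE_NO
FROM REGISTRATION WHERE STUDENT_ID = 42 AND COURSE_NO = 101;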

Good luck,

Ann


[firebird-support] [FB 2.1] Firebird engine seems to slow down on high load without utilizing hardware

2016-04-11 Thread thetr...@yahoo.com [firebird-support]
Hi,
 recently we had some strange performance issues with our Firebird DB server.
 On high load, our server started to slow down. Select and update SQL query 
times went up by more than 500% on average,
 reaching unreasonably high execution times in the worst case (several minutes 
instead of < 1 sec).
 

 OIT/OAT/Next Transaction statistics stayed within 1000 the whole time.
 We were not able to measure any hardware limiting factor. Indeed, this system 
was running with only 8 cores at about 70% CPU usage on max. load.
 We decided that this may be our problem since we experienced a similar problem 
at about 80% CPU load in the past.
 So we upgraded the hardware. As expected, the CPU-load dropped to ~35% usage 
on max. load scenario.
 But this did not solve the problem.
 Same story for the hard disk system. The usage is not even near its max 
capacity.
 

 We also can't see any impact on the hard disk.
 We're kind of stuck for ideas, because we have no idea what could be a 
potential bottleneck in the system.
 Since the hardware doesn't show a limit, there has to be something else - most 
likely Firebird engine related - that's limiting our system.
 We would be very grateful if anyone can give us hints where we can search 
further.
 Or someone has similar experiences to share with us.
 

 

 Operating System: Windows Server 2003
 Firebird: 2.1.5 Classic
 Dedicated database server (VMWare)
 

 CPU: 16 cores, each 2.4 GHz
 RAM: 32 GB
 About 14GB are used from OS and firebird processes under max load.
 
 HDD: SAN Storage System
 

 Average system IOPS under load read: 100
 Average system IOPS under load write: 550
 Backup Restore IOPS read: 1700
 Backup Restore IOPS write: 250
 SAN IOPS limit (max): 3000
 

 Firebird Config Settings, based on defaults
 DefaultDbCachePages = 1024
 LockMemSize = 134247728
 LockHashSlots = 20011
 
 Database
 size: about 45 GB
 450 to 550 concurrent connections
 Daily average of 65 transactions / second (peak should be higher)
 

 FB_LOCK_PRINT (without any params) while system was slowing down (~4 days 
uptime).
 I have to note that Firebird was not able to print the complete output (the 
stats were not cropped by me)
 

 LOCK_HEADER BLOCK
 Version: 16, Active owner:  0, Length: 134247728, Used: 82169316
 Semmask: 0x0, Flags: 0x0001
 Enqs: 4211018659, Converts: 10050437, Rejects: 9115488, Blocks: 105409192
 Deadlock scans:   1049, Deadlocks:  0, Scan interval:  10
 Acquires: 4723416170, Acquire blocks: 640857597, Spin count:   0
 Mutex wait: 13.6%
 Hash slots: 15077, Hash lengths (min/avg/max):3/  12/  25
 Remove node:  0, Insert queue: 36, Insert prior: 74815332
 Owners (456): forward: 131316, backward: 14899392
 Free owners (9): forward: 39711576, backward: 49867232
 Free locks (42409): forward: 65924212, backward: 23319052
 

 
 With best Regards,
 

 Patrick Friessnegg
 Synesc GmbH


Re: [firebird-support] Re: Composite index - issue or not existing feature?

2016-04-11 Thread liviusliv...@poczta.onet.pl [firebird-support]
>>No, the index entry does not include transaction information.  So yes, if 
>>you have a database that includes something
>>m to m relationship, the junctions records have to be read to validate 
>>them for the current transaction.  M to m
>>relationships are typically something like: students, registrations, and 
>>courses.  Each student registers for several
>>courses and each course has many students and the registration record 
>>consists only of a student id and a course
>>number (it's a junction record),  Firebird has to read the registration 
>>records.  This was done  knowingly because
>>keeping transaction information would greatly increase the size of index 
>>entries - one transaction id for the transaction
>>that created the record version with the value sought, plus one for the 
>>transaction that changed the value.  Obviously
>>that also increases the amount of I/O because modifying a key value 
>>modifies two index entries.  A fully mature
>>record (only one version) could skip both transaction ids, at the cost of 
>>even more I/O.

>>At one time, Firebird kept only one index entry for each value of a key 
>>for a record so if you typically modified a value
>>between A and B, then back to A, then back to B, you'd only have two index 
>>entries for that record.   That optimization
>>makes index garbage collection trickier and I think it was abandoned.  If 
>>it wasn't, obviously you'd need to have
>>separate index entries for each instance of a value in a record version 
>>chain to keep track of transaction ids.

>>It would be possible, I guess, to add a second index type that does keep 
>>transaction identifications so a designer
>>would be able to optimize indexes for junction records.  That's easy for 
>>me to say because I don't do that kind of
>>work anymore.

>>Cheers,
>>Ann

Hi Ann and others,

I read this once again and I see that not all is clear to me.
If I modify a record without modifying indexed fields - then I suppose the 
index entry must also be modified,
because it contains the dbKey. And if I update a record then we get a new 
record version with a new dbKey (once per transaction).
If this is true, then I see that the only benefit of the current index is the 
size of the whole index, because it does not contain transaction ids,
but the I/O cost is not reduced.

Am I missing something?

regards,
Karol Bieniaszewski




[firebird-support] Newer null sort order and legacy Applications

2016-04-11 Thread Dalton Calford dcalf...@distributel.ca [firebird-support]
I have a cluster of older-format databases (32-bit Firebird 1.5) and I wish to 
look into migrating them to the newer Firebird 2.5 64-bit format.


We replicate using older custom C code on telco servers, so redesigning the 
replication code and getting it certified for use is a non-starter.


So, we have to use the existing technology, which, having been designed over 14 
years ago, has some logic holes that we have to make allowances for.


One of those holes happens to be a dependency upon null sort order.


The newer database engine sorts NULLs with a different precedence than the 
older Firebird 1.5.
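
For example (the table here is purely illustrative), a plain ascending sort 
behaves differently on the two versions:

SELECT ACCOUNT_ID, CLOSED_DATE
FROM ACCOUNTS
ORDER BY CLOSED_DATE ASC;
/* Firebird 1.5: rows with a NULL CLOSED_DATE come out last.
   Firebird 2.5: NULL sorts lower than any value, so those rows come out
   first - unless the query is rewritten with NULLS LAST, which is exactly
   what we cannot do. */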


I cannot change the queries (in the client replication apps) to change the null 
sort order.


So, I am wondering if there is some database- or server-specific configuration 
setting that would change the default null sort order back to the older behaviour.


All ideas welcome.





[firebird-support] Case insensitive SIMILAR TO

2016-04-11 Thread Maxi maxiroba...@gmail.com [firebird-support]
Hi,

What is the best approach to make a case-insensitive SIMILAR TO query?

Example:

CREATE TABLE LOOKUP_ARTICLE
(
  ID INTEGER NOT NULL,
  HEADLINE VARCHAR(100) NOT NULL,
  PUB_DATE TIMESTAMP NOT NULL,
  AUTHOR_ID INTEGER,
  CONSTRAINT INTEG_354 PRIMARY KEY (ID)
);


INSERT INTO "LOOKUP_ARTICLE" ("HEADLINE", "PUB_DATE", "AUTHOR_ID")
VALUES ('f', '2016-04-10 18:17:29.4297', NULL);

INSERT INTO "LOOKUP_ARTICLE" ("HEADLINE", "PUB_DATE", "AUTHOR_ID")
VALUES ('fo', '2016-04-10 18:17:29.4297', NULL);

INSERT INTO "LOOKUP_ARTICLE" ("HEADLINE", "PUB_DATE", "AUTHOR_ID")
VALUES ('foo', '2016-04-10 18:17:29.4297', NULL);

INSERT INTO "LOOKUP_ARTICLE" ("HEADLINE", "PUB_DATE", "AUTHOR_ID")
VALUES ('fooo', '2016-04-10 18:17:29.4297', NULL);

INSERT INTO "LOOKUP_ARTICLE" ("HEADLINE", "PUB_DATE", "AUTHOR_ID")
VALUES ('hey-Foo', '2016-04-10 18:17:29.4297', NULL);


Result set
121 f 10.04.2016, 18:17:29.429 [null]
122 fo 10.04.2016, 18:17:29.429 [null]
123 foo 10.04.2016, 18:17:29.429 [null]
124 fooo 10.04.2016, 18:17:29.429 [null]

Then (with SIMILAR TO 'fo*'),

SELECT  "LOOKUP_ARTICLE"."ID", "LOOKUP_ARTICLE"."HEADLINE",
"LOOKUP_ARTICLE"."PUB_DATE", "LOOKUP_ARTICLE"."AUTHOR_ID"
FROM "LOOKUP_ARTICLE"
WHERE "LOOKUP_ARTICLE"."HEADLINE" SIMILAR TO 'fo*'
ORDER BY "LOOKUP_ARTICLE"."PUB_DATE" DESC, "LOOKUP_ARTICLE"."HEADLINE" ASC

Return:
f
fo
foo
fooo

Now, I'd like to write an expression to retrieve:

f
fo
foo
fooo
hey-Foo
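
The only workaround I have come up with so far (an untested sketch) is to fold 
case on both sides and loosen the pattern, something like:

SELECT "LOOKUP_ARTICLE"."ID", "LOOKUP_ARTICLE"."HEADLINE"
FROM "LOOKUP_ARTICLE"
WHERE UPPER("LOOKUP_ARTICLE"."HEADLINE") SIMILAR TO '%FO*%';

but presumably that cannot use an index on HEADLINE, so I wonder if there is 
a better way (a case-insensitive collation, maybe?).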

Any advice?

Regards
--
Maxi


Re: [firebird-support] How can I prevent that my database can be opened with another security.fdb.

2016-04-11 Thread 'Carlos H. Cantu' lis...@warmboot.com.br [firebird-support]

Re: [firebird-support] How can I prevent that my database can be opened with another security.fdb.

2016-04-11 Thread 'Thomas Steinmaurer' t...@iblogmanager.com [firebird-support]
> Dear all,
> If blocking SYSDBA from connecting to the database is the goal, then one can
> create a role with the name SYSDBA. Is there any downside to this?

AFAIR, with 2.5 and earlier this needs to be done with a direct system table 
insertion into RDB$ROLES. I don't know if this has changed in Firebird 3, but 
for sure Firebird 3 will reject direct system table modifications altogether.
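
If memory serves, the direct insert looks roughly like this on 2.5 (the owner 
name is just a placeholder):

INSERT INTO RDB$ROLES (RDB$ROLE_NAME, RDB$OWNER_NAME)
VALUES ('SYSDBA', 'DBOWNER');
COMMIT;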


--
With regards,
Thomas Steinmaurer
http://www.upscene.com

Professional Tools and Services for Firebird
FB TraceManager, IB LogManager, Database Health Check, Tuning etc.



Re: [firebird-support] How can I prevent that my database can be opened with another security.fdb.

2016-04-11 Thread Tjioe Hian Pin thp_p...@yahoo.com [firebird-support]
Dear all,
If blocking SYSDBA from connecting to the database is the goal, then one can 
create a role with the name SYSDBA. Is there any downside to this?
Regards,
Tjioe