Re: [firebird-support] Re: Issue with large table in FB 1.5 and 2.5

2019-05-29 Thread Helen Borrie hele...@tpg.com.au [firebird-support]
> Also one further question... Do later versions of Firebird (i.e. 3
> or 4) have any performance increase for cooperative garbage
> collection at all?

No.  GC requires reading the version chain of all records in the
contexts of their transactions.  It takes as long as it takes.

> Would I expect to see any performance improvement by any newer
> version,

No.

> or different server implementation here?

Possibly... with Superserver you can configure GC to run in the
background, or as a combination of background and co-op.  Its
performance would have to be tested with your machine's resources;
otherwise it's just one of those "How long is a piece of string?"
questions.

One idea that might save you some time (again, testing with
actual data is the only way you're going to find out) is to define
a GTT with the same metadata as the table you are currently using,
with the ON COMMIT PRESERVE ROWS property.  Remove the
dependencies from the current table and just use it for data collection.

Before you run your stored procs, feed the data from the collection
table into the GTT and have the SPs read from that instead.  Deleting
records and dropping the GTT is not in the picture, as the data in the
GTT will simply disappear when the connection detaches.  Then you can
just drop and recreate the collection table at will without affecting
any dependencies.
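
As a rough sketch of that arrangement (the GTT name and the abbreviated
column list below are placeholders; CLAIM_LOAD stands in for the current
collection table):

  CREATE GLOBAL TEMPORARY TABLE CLAIM_LOAD_GTT (
    CLAIM_ID   INTEGER NOT NULL,
    CLAIM_DATA VARCHAR(200)
    /* ...the same columns as the collection table... */
  ) ON COMMIT PRESERVE ROWS;

  /* before running the stored procs, feed the GTT from the
     collection table; the SPs then read from CLAIM_LOAD_GTT */
  INSERT INTO CLAIM_LOAD_GTT SELECT * FROM CLAIM_LOAD;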

Of course, if you are still using Fb 1.5 for some sites, it's not
going to work for them.  GTTs didn't come on the scene till 2.1.  Also,
it won't work in 2.1+, either, if you have not upgraded the ODS of
the databases to 11.1 or higher.

Helen





[firebird-support] Re: Issue with large table in FB 1.5 and 2.5

2019-05-29 Thread my...@techsol.org [firebird-support]
Also one further question... Do later versions of Firebird (i.e. 3 or 4) have
any performance increase for cooperative garbage collection at all?  Would I
expect to see any performance improvement by any newer version, or different
server implementation here?

 Thanks
 Myles


[firebird-support] Re: Issue with large table in FB 1.5 and 2.5

2019-05-29 Thread my...@techsol.org [firebird-support]
>That is your solution - but *do not delete those records*. 

 Thank you for the detailed reply, Helen.  I can definitely execute a script to
drop all dependencies, drop the table, and then rebuild everything via a Linux
CLI script.  I thought that was pretty extreme, but if this is the best
practice here, I'll definitely try it.
 

 It is good to hear that what is happening is normal though.  It has been 
driving me mad trying to work out what is going on, so if the drop table is the 
secret (without any record deletion), I can make that happen.
 

 Thanks for your insight.  Priceless, as usual :)
 

 Myles


[firebird-support] Re: Issue with large table in FB 1.5 and 2.5

2019-05-29 Thread my...@techsol.org [firebird-support]
>To me this sounds like the cooperative garbage collection kicks in when 
you select from the table. You may want to consider dropping the table 
entirely and recreating it. Alternatively, switch to SuperServer and set 
its garbage collection policy to background (I believe this was 
introduced in Firebird 2.0 or 2.1, so won't help with Firebird 1.5). 

 Thanks Mark.  I have actually done a complete drop of the table once, but it
has 27 stored procedures that are dependent on it, so they all have to be
dropped too.  I did write a Linux script to do that, but afterwards I read
that this might not make any difference to the garbage collection unless I
do a backup & restore as well.  Does that make sense to you?
 

 Myles


Re: [firebird-support] Issue with large table in FB 1.5 and 2.5

2019-05-29 Thread Helen Borrie hele...@tpg.com.au [firebird-support]
[..]
> My code that loads the data commits the transactions after the
> DELETE statement, and then at each 5,000 rows loaded.  I have
> checked this by watching the load and using IBExpert to count
> records, so I can confirm that the commits are happening.

> After the loading has completed, what then occurs is strange.  Any
> attempt to interact with that table (and only that table) forces
> some form of sweep or some delay that shows up as a process on the
> server running at near 100% CPU, for about 4-5 hours.  Then it calms
> down and access to the table is pretty much instantaneous.  I
> thought that this might be a sweep that is occurring after the load,
> so I set the database to not auto sweep.

No, it is not sweeping, it is cooperative garbage collection and
that's exactly how coop GC works.  In the daily ebb and flow of your
production tables, it is barely noticeable but GC on those bulk
deletes will hit the next transaction that reads from your offending
table.

BTW, background GC (mentioned by Mark) is not available in Classic or
Superclassic, so configuring for this won't do anything.

> I have force dropped the table, and rebuilt it.

That is your solution - but *do not delete those records*.

> I have done a backup with sweep and a restore to refresh the database.

That's extreme unless you already do a daily backup-and-restore...but
if you are restoring the database anyway, you don't need to have the
backup doing GC - use the -[no_]g[arbage_collect] switch.  On Linux
you would also need to make sure that subsequent connections are made
to the restored DB, not left pointing at the old one.
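
A minimal sketch of that sequence (credentials and paths here are
placeholders; gbak's -g switch skips garbage collection while it reads):

  gbak -b -g -user SYSDBA -password masterkey /data/mydb.fdb /backups/mydb.fbk
  gbak -c -user SYSDBA -password masterkey /backups/mydb.fbk /data/mydb_new.fdb

After the restore, point all subsequent connections at /data/mydb_new.fdb.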

> Nothing seems to make any difference.

No, none of those things would make any difference as long as you are
deleting those records and thus creating a large number of old record
versions.

Incidentally, if this table is the only one you have that is building
up that level of old record versions, setting the sweep interval to 0
is the right thing to do, regardless.

Then, of course, you must ensure that backups (without the -g switch)
and/or sweeps are done often enough to keep the "gap" in a comfortable
state.  Also, you don't need to do *both* a backup (sans -g) and a
sweep, since the backup achieves the same thing as sweep at the end of
it all.  Of course, it is moot as to whether sweeping before the
backup, thus taking the GC out of gbak's hands, would gain you
anything in terms of the time this housekeeping would take.
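
In gfix terms (database path and credentials are placeholders), that
housekeeping looks something like:

  gfix -h 0 -user SYSDBA -password masterkey /data/mydb.fdb
  gfix -sweep -user SYSDBA -password masterkey /data/mydb.fdb

-h 0 sets the sweep interval to zero (no automatic sweep); -sweep runs a
manual sweep whenever you choose to schedule it.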

Helen






Re: [firebird-support] Issue with large table in FB 1.5 and 2.5

2019-05-29 Thread Mark Rotteveel m...@lawinegevaar.nl [firebird-support]
On 29-5-2019 16:19, my...@techsol.org [firebird-support] wrote:
> I am attempting to debug some strange behavior on a large table that 
> exhibits the same problems on both FB 1.5 Classic and FB 2.5 Classic.  
> This is on a CentOS 7 server, with a ton of RAM and disk, etc.
> 
> The database has about 150 tables in it.  One table is the culprit.  All 
> other tables, stored procedures, etc. work flawlessly and fast.  But 
> this one table, which is used to load about 1 million rows each day for 
> processing is problematic.  The table is large in terms of the number of 
> columns, yet only has about 6 indexes on it.  Since it is only used as a 
> "kill & fill" table to get data into the database, before that data is 
> then processed into its destination tables, I would consider it 
> temporary.  But unfortunately the data provider is unable to give me the 
> delta of what has changed each night, so I'm forced to delete all rows 
> in this table, then load them from this file.

[..]

> After the loading has completed, what then occurs is strange.  Any 
> attempt to interact with that table (and only that table) forces some 
> form of sweep or some delay that shows up as a process on the server 
> running at near 100% CPU, for about 4-5 hours.  Then it calms down and 
> access to the table is pretty much instantaneous.  I thought that this 
> might be a sweep that is occurring after the load, so I set the database 
> to not auto sweep.  I have force dropped the table, and rebuilt it.  I 
> have done a backup with sweep and a restore to refresh the database.  
> Nothing seems to make any difference.

To me this sounds like the cooperative garbage collection kicks in when 
you select from the table. You may want to consider dropping the table 
entirely and recreating it. Alternatively, switch to SuperServer and set 
its garbage collection policy to background (I believe this was 
introduced in Firebird 2.0 or 2.1, so won't help with Firebird 1.5).
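
For reference, the relevant firebird.conf setting (SuperServer only; a
sketch, assuming Firebird 2.5 defaults) would be:

  # cooperative | background | combined
  GCPolicy = background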

Mark
-- 
Mark Rotteveel


Re: ODP: [firebird-support] Query that was very fast under the oldest 2.5 versions are very slow under 2.5.8 or 2.5.9

2019-05-29 Thread liviuslivius liviusliv...@poczta.onet.pl [firebird-support]
>> But this is not the problem, because 2.5.0 does this query very fast,
>> with no special cache settings.

Yes, I say the same in the answer ;-)  "And about the changed plan, it is
not related to this setting, but this only increases visibility (of the
issue) with your small cache settings."

Regards,
Karol Bieniaszewski

[firebird-support] Issue with large table in FB 1.5 and 2.5

2019-05-29 Thread my...@techsol.org [firebird-support]
I am attempting to debug some strange behavior on a large table that exhibits 
the same problems on both FB 1.5 Classic and FB 2.5 Classic.  This is on a 
CentOS 7 server, with a ton of RAM and disk, etc.
 

 The database has about 150 tables in it.  One table is the culprit.  All other 
tables, stored procedures, etc. work flawlessly and fast.  But this one table, 
which is used to load about 1 million rows each day for processing is 
problematic.  The table is large in terms of the number of columns, yet only 
has about 6 indexes on it.  Since it is only used as a "kill & fill" table to 
get data into the database, before that data is then processed into its 
destination tables, I would consider it temporary.  But unfortunately the data 
provider is unable to give me the delta of what has changed each night, so I'm 
forced to delete all rows in this table, then load them from this file.
 

 My code that loads the data commits the transactions after the DELETE 
statement, and then at each 5,000 rows loaded.  I have checked this by watching 
the load and using IBExpert to count records, so I can confirm that the commits 
are happening.
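
In script form, that pattern amounts to the following (table and column
names are illustrative; this is a sketch, not the actual loader):

  DELETE FROM CLAIM_LOAD;
  COMMIT;
  /* batches of inserts, committing after each 5,000 rows */
  INSERT INTO CLAIM_LOAD (CLAIM_ID, CLAIM_DATA) VALUES (1, 'first row');
  /* ...4,999 more INSERTs... */
  COMMIT;
  /* ...repeat until all ~1,000,000 rows are loaded... */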
 

 After the loading has completed, what then occurs is strange.  Any attempt to 
interact with that table (and only that table) forces some form of sweep or 
some delay that shows up as a process on the server running at near 100% CPU, 
for about 4-5 hours.  Then it calms down and access to the table is pretty much 
instantaneous.  I thought that this might be a sweep that is occurring after 
the load, so I set the database to not auto sweep.  I have force dropped the 
table, and rebuilt it.  I have done a backup with sweep and a restore to 
refresh the database.  Nothing seems to make any difference.
 

 This is what database statistics are showing for the database:
 Database header page information:
 Flags   0
 Checksum  12345
 Generation  995313
 Page size   16384
 ODS version 10.1
 Oldest transaction  93188
 Oldest active   990357
 Oldest snapshot 990354
 Next transaction995298
 Bumped transaction  1
 Sequence number 0
 Next attachment ID  0
 Implementation ID   19
 Shadow count0
 Page buffers0
 Next header page0
 Database dialect3
 Creation date   May 8, 2019 23:37:46
 Attributes  force write
 

 Variable header data:
 Sweep interval: 0
 *END*
 

 This is what statistics show for that one table:
 CLAIM_LOAD (276)
 Primary pointer page: 711, Index root page: 712
 Average record length: 1475.73, total records: 962662
 Average version length: 0.00, total versions: 0, max versions: 0
 Data pages: 93152, data page slots: 116597, average fill: 94%
 Fill distribution:
  0 - 19% = 1
 20 - 39% = 0
 40 - 59% = 0
 60 - 79% = 0
 80 - 99% = 93151
 

 Can anyone see any issues with these statistics, or have any suggestions as to 
what may be going on?  Is there anything that can be seen in logs for this 
mysterious "after load" behavior I'm seeing?
 

 Thanks in advance for any suggestions.
 

 Myles


Re: ODP: [firebird-support] Query that was very fast under the oldest 2.5 versions are very slow under 2.5.8 or 2.5.9

2019-05-29 Thread Dimitry Sibiryakov s...@ibphoenix.com [firebird-support]
29.05.2019 13:37, danysch...@yahoo.com [firebird-support] wrote:
> The very important difference is the plan.  2.5.8 is not using the
> MOVI_FECH index, so it must read more than one million records to
> analyze ("MOVI"."FECH" between '20190301' and '20190412').

   Remove "DISTINCT" from subqueries. It is pointless there in any case.


-- 
   WBR, SD.








Re: ODP: [firebird-support] Query that was very fast under the oldest 2.5 versions are very slow under 2.5.8 or 2.5.9

2019-05-29 Thread danysch...@yahoo.com [firebird-support]
I added this:

 http://tracker.firebirdsql.org/browse/CORE-6070

 Thank you.


Re: ODP: [firebird-support] Query that was very fast under the oldest 2.5 versions are very slow under 2.5.8 or 2.5.9

2019-05-29 Thread danysch...@yahoo.com [firebird-support]
Hi Karol; 

 I was investigating what the query optimizer is doing with the MOVI_FECH
index.

 I found that "MOVI.FECH BETWEEN '20190301' AND '20190412'" does not use the
MOVI_FECH index.

 I also found that "MOVI.FECH >= '20190301' AND MOVI.FECH <= '20190412'" does
not use the MOVI_FECH index either.

 The only expression that uses the MOVI_FECH index is "MOVI.FECH =
'20190412'".

 This is the problem, I guess.

 I will report the case.
 

 



Re: ODP: [firebird-support] Query that was very fast under the oldest 2.5 versions are very slow under 2.5.8 or 2.5.9

2019-05-29 Thread danysch...@yahoo.com [firebird-support]
Hi Karol; 

 But this is not the problem, because 2.5.0 does this query very fast, with no
special cache settings.  The very important difference is the plan: 2.5.8 is
not using the MOVI_FECH index, so it must read more than one million records
to analyze ("MOVI"."FECH" between '20190301' and '20190412').  This is the
MOVI DDL:
 

 CREATE TABLE MOVI
 (
   MOVI Char(10) NOT NULL COLLATE ES_ES_AI,
   PROC Char(10) NOT NULL COLLATE ES_ES_AI,
   MIEM Char(10) NOT NULL COLLATE ES_ES_AI,
   TIPO Char(1) NOT NULL COLLATE ES_ES_AI,
   AVIS Char(3) NOT NULL COLLATE ES_ES_AI,
   HECH Char(1) NOT NULL COLLATE ES_ES_AI,
   FECH Char(8) NOT NULL COLLATE ES_ES_AI,
   HORA Char(4) NOT NULL COLLATE ES_ES_AI,
   DSCR Varchar(200) NOT NULL COLLATE ES_ES_AI,
   FOJA Varchar(10) NOT NULL COLLATE ES_ES_AI,
   PRES Char(1) NOT NULL COLLATE ES_ES_AI,
   REPI Char(1) NOT NULL COLLATE ES_ES_AI,
   OWNR Char(10) NOT NULL COLLATE ES_ES_AI,
   TEXT Blob sub_type 0,
   "EDIT" Char(23) NOT NULL COLLATE ES_ES_AI,
   CONSTRAINT MOVI_PK PRIMARY KEY (MOVI)
 );
 CREATE INDEX MOVI_FECH ON MOVI (FECH,HORA);
 CREATE INDEX MOVI_HECH ON MOVI (HECH);
 CREATE INDEX MOVI_MIEM ON MOVI (MIEM);
 CREATE INDEX MOVI_OWNR ON MOVI (OWNR);
 CREATE INDEX MOVI_PRES ON MOVI (PRES);
 CREATE INDEX MOVI_PROC ON MOVI (PROC);
 CREATE INDEX MOVI_REPI ON MOVI (REPI);
 CREATE INDEX MOVI_TIPO ON MOVI (TIPO);
 

 Thank you.




Re: ODP: [firebird-support] Query that was very fast under the oldest 2.5 versions are very slow under 2.5.8 or 2.5.9

2019-05-29 Thread liviuslivius liviusliv...@poczta.onet.pl [firebird-support]
>> I enabled in firebird.conf the parameter DefaultDbCachePages = 2048,
>> but after that the performance was 1 minute slower.

Yes, I am talking about this param; you can also set it individually per
database.  And your value is small: if your DB page size is 16 KB, then the
cache in your case is only 32 MB, which is a cache value from the Windows
3.1 days ;-)  If your page size is 4 KB, then it is 8 MB... And about the
changed plan, it is not related to this setting; this only increases
visibility (of the issue) with your small cache settings.

Regards,
Karol Bieniaszewski
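
The arithmetic behind those figures, for reference:

  2048 pages x 16384 bytes/page = 33,554,432 bytes ~ 32 MB
  2048 pages x  4096 bytes/page =  8,388,608 bytes ~  8 MB

To set the cache per database rather than globally, gfix can do it (the
buffer count and path here are placeholders):

  gfix -buffers 8192 -user SYSDBA -password masterkey /data/mydb.fdb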