Re: A thought on re-indexing of a large database after repairs, compacting etc

2018-04-23 Thread Paul Lovejoy via 4D_Tech
We have an SSD. A high-speed RAID is another story. > On 23 Apr 2018 at 10:59, Wayne Stewart via 4D_Tech <4d_tech@lists.4d.com> wrote: > >> I don’t have the luxury of installing an SSD RAID. > > A 512GB SSD is less than $200 >

Re: A thought on re-indexing of a large database after repairs, compacting etc

2018-04-23 Thread npdennis via 4D_Tech
> You know the old saying: “A watched pot never boils.” > Watching 4D reload the same table with 7 million records 8 times to generate > 8 indexes is kind of like that. Later versions of 4D optimized index rebuilding by caching the records and building all of the indexes one table at a time.
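To make the difference concrete, here is a minimal sketch (in Python, purely illustrative; the function names are hypothetical and this is not 4D's actual implementation) of the two strategies being discussed: one full table scan per index versus a single scan that feeds every index on the table.

    # Old strategy: the table is re-read from disk once per index.
    def rebuild_per_index(read_table, indexes):
        for index in indexes:
            for record in read_table():   # full table scan for each index
                index.add(record)

    # Optimized strategy: one scan of the table feeds all of its indexes.
    def rebuild_per_table(read_table, indexes):
        for record in read_table():       # single full table scan
            for index in indexes:
                index.add(record)

With 8 indexes over a 7-million-record table, the first approach reads roughly 56 million records from disk, while the second reads 7 million and does the remaining index work against records already in memory.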

Re: A thought on re-indexing of a large database after repairs, compacting etc

2018-04-23 Thread Wayne Stewart via 4D_Tech
> I don’t have the luxury of installing an SSD RAID. A 512GB SSD is less than $200: https://www.amazon.com/Samsung-500GB-Internal-MZ-76E500B-AM/dp/B0781Z7Y3S/ (or just over if you go for the Pro version with the 5-year warranty).

Re: A thought on re-indexing of a large database after repairs, compacting etc

2018-04-23 Thread Arnaud de Montard via 4D_Tech
> On 23 Apr 2018 at 07:51, Paul Lovejoy via 4D_Tech <4d_tech@lists.4d.com> wrote: > > Chuck, > > You know the old saying: “A watched pot never boils.” > Watching 4D reload the same table with 7 million records 8 times to generate > 8 indexes is kind of like that. > > I don’t have the

Re: A thought on re-indexing of a large database after repairs, compacting etc

2018-04-22 Thread Paul Lovejoy via 4D_Tech
Chuck, You know the old saying: “A watched pot never boils.” Watching 4D reload the same table with 7 million records 8 times to generate 8 indexes is kind of like that. I don’t have the luxury of installing an SSD RAID. I’m sure better hardware would help. Paul > On 23 Apr 2018 at

Re: A thought on re-indexing of a large database after repairs, compacting etc

2018-04-22 Thread Chuck Miller via 4D_Tech
OK, I don’t get it. We have a database of over 200 GB with many millions of records. We run using SSDs in an Areca RAID, 1-terabyte SSDs. I can tell you that when we restored from a backup and rebuilt indices, it took no more than 20 minutes or so. It does not matter if it is by table or

Re: A thought on re-indexing of a large database after repairs, compacting etc

2018-04-22 Thread Arnaud de Montard via 4D_Tech
> On 22 Apr 2018 at 11:08, Paul Lovejoy via 4D_Tech <4d_tech@lists.4d.com> wrote: > > Hi, > > I guess our database of 120 GB, running on a 32-bit machine, doesn’t benefit > from this. We were working on the most recent R release and yesterday we went > to 15.5. > When reindexing the

Re: A thought on re-indexing of a large database after repairs, compacting etc

2018-04-22 Thread Keisuke Miyako via 4D_Tech
15.5 may be a newer release than 15Rx, but it is an older branch.

15.0 ➡︎ 15.1 ➡︎ 15.2 ➡︎ 15.3 ➡︎ 15.4 ➡︎ 15.5
┗ 15R2
  ┗ 15R3
    ┗ 15R4
      ┗ 15R5
        ┗ 16.0 ➡︎ 16.1 ➡︎ 16.2 ➡︎ 16.3

left to right: bug fixes
top to bottom: new features

2018/04/22 18:08, Paul Lovejoy via 4D_Tech

Re: A thought on re-indexing of a large database after repairs, compacting etc

2018-04-22 Thread Paul Lovejoy via 4D_Tech
Hi, I guess our database of 120 GB, running on a 32-bit machine, doesn’t benefit from this. We were working on the most recent R release and yesterday we went to 15.5. When reindexing the entire database, 4D Server/4D are still going index by index instead of table by table. Or maybe I missed

Re: A thought on re-indexing of a large database after repairs, compacting etc

2018-04-22 Thread Paul Lovejoy via 4D_Tech
Hi Jeff, We are on 15.5 now. I don’t see any R releases which are more recent. v15.5 does not do this… Paul > On 21 Apr 2018 at 17:06, Jeffrey Kain via 4D_Tech <4d_tech@lists.4d.com> wrote: > > 4D implemented this very improvement in the v15 R releases. > > -- > Jeffrey Kain >

Re: A thought on re-indexing of a large database after repairs, compacting etc

2018-04-21 Thread Keisuke Miyako via 4D_Tech
you can read about it in the 15R4 upgrade ref. ftp://ftp-public.4d.fr/Documents/Products_Documentation/LastVersions/Line_15R4/VIntl/4D_Upgrade_v15_R4.pdf p.62 > In 4D v15 R4, we have greatly optimized the algorithm for global reindexing > of a database. The whole process has been dramatically

A thought on re-indexing of a large database after repairs, compacting etc

2018-04-21 Thread Paul Lovejoy via 4D_Tech
Hi, I’m working with a pretty big database, with about 120 GB of data and about 45 million records over about 250 tables. This database has been in use for about 20 years and is growing ever faster. I guess you could call this a gripe, but I don’t understand why, if a table has several indexes,