Thanks Bob. I think my problem is related to duplicate records in multiple tables.
Unfortunately, the design of this db was poor, and the input of data (CSV files and the LOAD command) is not checked thoroughly enough to avoid this. I'm now experimenting on a backup of the db with the DELETE DUPLICATES command in an attempt to clean this up. I have yet to determine exactly how that command behaves, since the documentation only says "This command deletes all but the first row for each set of duplicate rows." I can't guess which row it will treat as "first" (maybe ORDER BY?), since my data is laid out like this:

prec_type | case# | name | address | city | state | zip | etc | etc

What happens is that a file is sent to us with updated demographic information, and it seems the only unique information is the client#. Anything else in the record could change: the address could get updated, or added, etc. What I need to do is cull the record with the least data present in each set of duplicate case# records.

What I've also found is that there are no indexes, foreign keys, or primary keys to speak of, so I have no clue how R:Base decides on table relationships; there are no views, simply a form that pulls from these tables. Since case# exists in each table, I assume that's the link, but DESCRIPTION and a few other commands that help me in other RDBMSs seem to be absent, as is anything like an Access-style relationship viewer, so I'm in the dark.

Since I have an MS SQL Server available and am comfortable using VB, I may create a CSV dump of all tables, use VB to clean it up, and use DTS to get it into SQL Server, then create my GUI with ASP. It's read-only data from the users' perspective, and this may make it a bit easier to deploy.

Thanks for your input.

On Fri, Jul 18, 2008 at 8:16 AM, <[EMAIL PROTECTED]> wrote:
> John,
> Even with 6.5, you are not approaching any limits, nor the number of
> records that would cause speed issues. (There are System V installs
> still running today that have hundreds of thousands of records without
> speed issues.)
>
> Make a backup and UNLOAD / RELOAD your database.
> That will reclaim (compact) any physical disk space spread throughout
> your database and it will recreate your indexes. This can make a
> significant difference. A PACK command can do similar, but make sure
> you have a backup and plenty of hard drive space.
>
> Then you should look at the indexes you have applied. For instance,
> applying an index to a column that has only 2 unique values over 35,000
> records does not do much good. Having indexes on too many columns (I
> have seen cases where people put indexes on all columns) will actually
> cause the system to run slower. Views can be very much affected by
> proper indexing.
>
> You can rest assured that in your case it is not the number of tables
> or records causing a speed issue.
>
> -Bob
>
> --
> Thompson Technology Consultants
> LaPorte, IN 46350
> 219-363-7441
>
> -------------- Original message --------------
> From: "John Croson" <[EMAIL PROTECTED]>
>
>> Unfortunately, I'm using 6.5+.
>>
>> On Fri, Jul 18, 2008 at 3:04 AM, A. Razzak Memon wrote:
>> > At 03:06 PM 7/17/2008, John Croson wrote:
>> >
>> >> Is there a maximum record / table count for RBase?
>> >
>> > John,
>> >
>> > Here's a comprehensive chart to explain it all.
>> >
>> > http://www.rbase.com/products/compare.php
>> >
>> > Very Best R:egards,
>> >
>> > Razzak.
>>
>> --
>> John Croson
>> [EMAIL PROTECTED]
>> http://pcnorb.blogspot.com/
>> http://pcnorb.homelinux.org/

--
John Croson
[EMAIL PROTECTED]
http://pcnorb.blogspot.com/
http://pcnorb.homelinux.org/
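P.S. For anyone following along: since the plan is to dump the tables to CSV and clean them up programmatically before loading into SQL Server, the "keep the most complete row per case#" step can be sketched like this. This is a minimal illustration in Python rather than VB, and the column names (`case#`, `name`, `address`) are placeholders for the real schema, not anything DELETE DUPLICATES actually does:

```python
import csv
import io

def dedupe_by_key(rows, key_field="case#"):
    """For each value of key_field, keep the row with the most
    non-empty fields (i.e. cull the least-complete duplicates)."""
    best = {}  # key value -> (filled_count, row)
    for row in rows:
        key = row[key_field]
        # Count fields that actually contain data.
        filled = sum(1 for v in row.values() if v and v.strip())
        if key not in best or filled > best[key][0]:
            best[key] = (filled, row)
    return [row for _, row in best.values()]

# Example with a small in-memory CSV (hypothetical data):
data = io.StringIO(
    "case#,name,address\n"
    "1,Smith,\n"
    "1,Smith,123 Main St\n"
    "2,Jones,\n"
)
cleaned = dedupe_by_key(list(csv.DictReader(data)))
```

Here `cleaned` holds one row per case#, with case# 1 keeping the row that has the address filled in. The same ranking idea (count of populated columns) would carry over to a VB/DTS pipeline.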

