Hi Sami,
Thanks for your input. Believe it or not, I may not have any forms at all in
the database. The data is retrieved from an external file. After the data has
been added, the calculations are run, and the reports are printed. There is
very little user interaction except to start the process. As stated in other posts,
I will try the single table approach first and see how it performs. I am
expecting in excess of 2 million rows in the table.
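Roughly the shape I have in mind for the single table - all names here
are placeholders, and the syntax is generic SQL rather than anything
product-specific:

  CREATE TABLE master_table (
      subgroup  INTEGER NOT NULL,  -- stands in for the ~700 separate tables
      item_id   INTEGER NOT NULL,  -- unique within its subgroup
      value     NUMERIC,           -- raw data loaded from the external file
      result    NUMERIC            -- filled in by the daily calculations
  );
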
Best regards,
Mike Young
On Wed, 26 Sep 2001 16:56:55 -0500, Sami Aaron wrote:
>Mike -
>
>I did a database once, used to produce a book, that had about 30 sets of
>three similar tables. Each set of three tables produced a different
>section of the book. These tables were not _exactly_ the same, but similar.
>
>For each set of three tables, I had a data-entry form, a proofing report, a
>three-table view, a special view for data exporting, and some other stuff.
>I was very careful to use special naming conventions for each object - for
>example, the default form and report were named the same as the main table
>in the set of three tables. I created a control table just to manage things
>like the default sort order for each table and report, the name of the menu
>block and application block for this section, the name of the program that
>would run the Ventura output files, and so on.
>
>The largest table has only about 7,000 rows, but there are 250+ tables in
>the database.
>
>Would I do this design the same way again? Well, I don't think I have any
>alternative since the variable fields in each section are really unique and
>the final printed output differs dramatically in some cases. HOWEVER, I
>will stipulate that it was the MOST TEDIOUS and time-consuming project I've
>ever encountered. I cannot imagine having 700 distinct tables if there is
>any other possible way to manage the data structure! I spent quite a bit of
>time preparing checklists for each of the 30 sections to confirm that I had
>installed some change or process on each one.
>
>And, to boot - in the DOS version (where my application was originally
>created) you could search for and edit info in forms and reports much more
>easily than you can in the Windows version. I could find all instances of a
>pop-up to a specific table, for instance, and change the sort order on the
>pop-up right in the sys_forms table - you can't do those kinds of things in
>the Windows forms. You, my friend, would have to open, edit, and save each
>form ... one at a time ... it gives me a headache just remembering it.
>
>Just my 50 cents' worth!
>
>Sami
>
>-----------------------------------------------------------
>Sami Aaron
>Software Management Specialists
>13214 W. 62nd Terr, #139
>Shawnee KS 66216
>913-915-1971
>http://www.softwaremgmt.com
>
>
>----- Original Message -----
>From: "Lawrence Lustig" <[EMAIL PROTECTED]>
>To: <[EMAIL PROTECTED]>
>Sent: Wednesday, September 26, 2001 11:24 AM
>Subject: RE: Database limits
>
>
>> > I thought about this but the table would end up with
>> > millions of rows of data
>> > and the only useful index would have at least 700
>> > duplicates and I am not
>> > sure how efficient that would be.
>>
>> Mike:
>>
>> There's a saying in programming: "Premature
>> optimization is the root of all evil". Make up a test
>> database and see what the response is like. Don't
>> make a messy database design unless you _know_ that
>> you can't write your project with a clean and easily
>> maintainable design. Also, don't let your design be
>> too heavily influenced by what the client's data looks
>> like on paper, or any limitations they may have run
>> across trying to represent it in Excel or Access.
>>
>> I have a database with a large 1.5 million row table
>> against which complex GROUP BY selects and JOINS are
>> done in Oterro, and the response is perfectly
>> acceptable for on-line querying.
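>>
>> By "complex GROUP BY selects and JOINs" I mean
>> queries of roughly this shape (table and column
>> names invented for illustration):
>>
>>   SELECT r.region_name, COUNT(*), SUM(s.amount)
>>     FROM sales s
>>     JOIN regions r ON r.region_id = s.region_id
>>    GROUP BY r.region_name;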
>>
>> As for your indexing described above, it seems to me
>> that if each row belongs to only one "subgroup" and
>> each row has a unique column value within that
>> subgroup, then a key built on those two columns will
>> uniquely identify the row.
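>>
>> In generic SQL (names are placeholders, not tied to
>> any particular product's syntax) that key is simply:
>>
>>   CREATE UNIQUE INDEX ix_sub_item
>>       ON master_table (subgroup, item_id);
>>
>> and because subgroup is the leading column, the same
>> index also serves plain "WHERE subgroup = n" lookups.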
>>
>> > Speed is going to be important. Every day there are
>> > around 120 complex
>> > calculations made for each table with the results
>> > being stored in the table. The
>> > report analysis is a whole other very complex
>> > operation.
>>
>> Consider the possibility of holding all the data in one
>> table and then, if the calculations in the one table
>> are too slow, doing the following:
>>
>> 1) Identify the subset of records for the calculation.
>> 2) Put them in a temporary table.
>> 3) Perform the calculations.
>> 4) Update the master table.
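>>
>> In generic SQL (table, column, and subgroup values
>> are placeholders), that cycle might look like:
>>
>>   -- 1) + 2) copy one subgroup into a work table
>>   INSERT INTO work_table
>>        SELECT *
>>          FROM master_table
>>         WHERE subgroup = 42;
>>
>>   -- 3) run the calculations against work_table,
>>   --    then 4) write the results back
>>   UPDATE master_table
>>      SET result =
>>          (SELECT w.result
>>             FROM work_table w
>>            WHERE w.subgroup = master_table.subgroup
>>              AND w.item_id  = master_table.item_id)
>>    WHERE subgroup = 42;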
>>
>> > I guess at this point I could ask what is the
>> > maximum number of rows that a
>> > table can accept and at what point does it become
>> too unwieldy?
>>
>> Putting the rows in a single table will never be
>> nearly as unwieldy as trying to maintain 700 identical
>> tables. It might be a little _slower_ in terms of
>> retrieval, but almost certainly not enough slower
>> to outweigh the ease of programming.
>> --
>> Larry
>>