I think you will be better off just searching the table.
You are not going to accomplish anything by using a temp table here unless
you can significantly reduce the number of rows in it.  You will have to
create the new table and index it, which will take some time, and your
searches won't be any faster because you will still have to search all
80,000 rows.

Depending on the searches, maybe you can create a computed column in your
table that uses the IHASH function to speed up your searches.

Indexes do not put things in RAM; they allow you to do fast searches by
doing the equivalent of keeping a copy of that field in a sorted list. (Have
you ever noticed that when defining indexes on text fields in the designer,
it asks how many characters you want to index?)  When you have a sorted list
and you need to find something in it, instead of doing a sequential search
starting at the top and going until your value is found, you start in the
middle and decide whether what you are looking for is greater or smaller
than the middle value.  Then you take either the top or bottom half of the
list and do the same thing until your value is found.  This speeds your
search up enormously, because each data access cuts the remaining list in
half: 80,000 rows take at most about 17 comparisons instead of 80,000.  I
believe the R:base indexes are even more sophisticated than this, but that
doesn't really matter here.  What I am getting at is that if you are
searching the same 80,000 rows, it makes no difference whether the table has
2 fields or 200 fields; it is the length of the list that makes the
difference.
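The halving search described above can be sketched in a few lines of Python.
This is an illustration only; R:base's actual index structures are more
sophisticated than a plain sorted list:

```python
def binary_search(sorted_list, target):
    """Find target in sorted_list by repeatedly halving the range."""
    lo, hi = 0, len(sorted_list) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if sorted_list[mid] == target:
            return mid          # found it
        elif sorted_list[mid] < target:
            lo = mid + 1        # target is in the top half
        else:
            hi = mid - 1        # target is in the bottom half
    return -1                   # not found

# 80,000 rows need at most ~17 halvings, since 2^17 = 131,072.
names = sorted(f"name{i:05d}" for i in range(80000))
print(binary_search(names, "name04242"))  # prints 4242
```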

The problem with searching long text values is that the indexes are big, and
you have to search a lot of characters to get a unique string.  This makes
for inefficient indexes.  Try this in an old version of R:base (pre 4.5++)
and you will find that large text searches were really slow.  This is where
the IHASH function comes in: it can take several characters and return an
integer for them.  You do need to know a little about what data is in this
field and how unique the first few characters are.

Read up on the IHASH function; you may want to add a computed column to your
table using IHASH and then index that.  Then add that criterion to your
search string, using the IHASH function and comparing it to the IHASH column
in the table.
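The idea behind this technique can be illustrated in plain Python (this is a
stand-in hash, not R:base's actual IHASH algorithm, and the column and
function names below are made up for the example):

```python
def prefix_hash(text, nchars=8):
    """Reduce the first nchars of a string to a 32-bit integer.
    Illustrative stand-in for R:base's IHASH; the real function's
    algorithm and signature may differ."""
    h = 0
    for ch in text[:nchars]:
        h = (h * 31 + ord(ch)) & 0xFFFFFFFF
    return h

# The hash of the long text is stored as an indexed integer column.
# A search then adds the integer comparison as an extra criterion,
# conceptually something like:
#   WHERE desc_hash = <hash of search value> AND descr = <search value>
# so the small integer index narrows the candidate rows before the
# expensive long-text comparison is done.
print(prefix_hash("Widget, blue, large, 3-inch"))
```

Note that two different strings can share the same hash, which is why the
original text comparison stays in the search criteria; the hash only filters.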

Troy


-----Original Message-----
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] Behalf Of David M.
Blocker
Sent: Tuesday, September 09, 2003 6:47 PM
To: RBASE-L Mailing List
Subject: [RBASE-L] - Re: RBG65 - To Project of not to Project


I find that it's best not to generalize.

I'd suggest trying it in tests both ways and testing the times!  Then let us
know what you found out!

David Blocker

----- Original Message -----
From: "Jim Limburg" <[EMAIL PROTECTED]>
To: "RBASE-L Mailing List" <[EMAIL PROTECTED]>
Sent: Tuesday, September 09, 2003 4:04 PM
Subject: [RBASE-L] - RBG65 - To Project of not to Project


> G-Day all
>
> Which situation by you all's experience would be faster. I have a table
> which had about 30 columns, most of which are small - integers,1-8 size
> in text and so on. One of the columns is text 100. This table has about
> 80,000 rows in it. The text 100 column is indexed, but searching is
somewhat
> slow. The table has a PK - integer. Would it be faster to project a
> temp table to a user who searches it frequently, of just the PK and the
> text 100 column. The user who does this searching wouldn't need it updated
> each time.  I could put a refresh data button on to reload the temp table,
> so the user could do that a few times a day.
>
> Or is it just faster to search on the table that has the 30 columns. I
> know indexing is supposed to "put things in ram" so to speak, or at least
> that's how I've figured it, so I'm not sure at this point what would be
> best. I'm curious to find out some of you guru's findings in areas like
> this.
>
> Jim Limburg
>
>
